Except where otherwise noted, this document is licensed under the Creative Commons Attribution ShareAlike 3.0 License.
Hortonworks Data Platform (HDP) and any of its components are not anticipated to be combined with any hardware, software or data, except as expressly recommended in this documentation.
Unlike other providers of platforms built using Apache Hadoop, Hortonworks contributes 100% of our code back to the Apache Software Foundation. The Hortonworks Data Platform is Apache-licensed and completely open source. We sell only expert technical support, training, and partner-enablement services. All of our technology is, and will remain, free and open source.
Please visit the Hortonworks Data Platform page for more information on Hortonworks technology. For more information on Hortonworks services, please visit either the Support or Training page. Feel free to Contact Us directly to discuss your specific needs.
Contents
- 1. Using Data Integration Services Powered by Talend
- 2. Using HDP for Metadata Services (HCatalog)
- 3. Using Apache Hive
- 4. Using HDP for Workflow and Scheduling (Oozie)
- 5. Using Apache Sqoop
    - 1. Apache Sqoop Connectors
    - 2. Sqoop Import Table Commands
    - 3. Netezza Connector
    - 4. Sqoop-HCatalog Integration
        - 4.1. HCatalog Background
        - 4.2. Exposing HCatalog Tables to Sqoop
        - 4.3. Automatic Table Creation
        - 4.4. Delimited Text Formats and Field and Line Delimiter Characters
        - 4.5. HCatalog Table Requirements
        - 4.6. Support for Partitioning
        - 4.7. Schema Mapping
        - 4.8. Support for HCatalog Data Types
        - 4.9. Providing Hive and HCatalog Libraries for the Sqoop Job
        - 4.10. Examples
- 6. Installing and Configuring Flume in HDP
- 7. Using Cascading
List of Tables