jdbc - Unable to import data from Vertica to Cassandra using Sqoop

I am trying to use Sqoop to import a table from Vertica into DataStax Enterprise 4.5. No error or exception is reported, but no data shows up in the target table.

Here is what I did:

First I created the keyspace and table in cqlsh:

CREATE KEYSPACE IF NOT EXISTS npa_nxx
  WITH replication = { 'class': 'SimpleStrategy', 'replication_factor': '1' };

CREATE TABLE npa_nxx.npa_nxx_data (
  part varchar,
  market varchar,
  PRIMARY KEY (market)
);
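Before running the import, it is worth confirming that the keyspace and table look the way you expect. A quick check from an interactive cqlsh session (the host is a placeholder for your Cassandra node):

```
$ cqlsh xx.xxx.xx.xxx
cqlsh> DESCRIBE TABLE npa_nxx.npa_nxx_data;
```

This prints the full CREATE TABLE statement as Cassandra stored it, so any mismatch between the schema and the import's column mapping is visible up front.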

Then I created the options file (one option per line, as Sqoop's --options-file format requires):

cql-import
--table
dim_location
--cassandra-keyspace
npa_nxx
--cassandra-table
npa_nxx_data
--cassandra-column-mapping
region:region,market:market
--connect
jdbc:vertica://xx.xxx.xx.xxx:5433/schema
--driver
com.vertica.jdbc.Driver
--username
xxxxx
--password
xxx
--cassandra-host
xx.xxx.xx.xxx

Then I executed the Sqoop command:

dse sqoop --options-file /usr/share/dse/demos/sqoop/import.options

And here is the full output:

14/10/30 09:28:53 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
14/10/30 09:28:53 WARN sqoop.ConnFactory: Parameter --driver is set to an explicit driver however appropriate connection manager is not being set (via --connection-manager). Sqoop is going to fall back to org.apache.sqoop.manager.GenericJdbcManager. Please specify explicitly which connection manager should be used next time.
14/10/30 09:28:53 INFO manager.SqlManager: Using default fetchSize of 1000
14/10/30 09:28:53 INFO tool.CodeGenTool: Beginning code generation
14/10/30 09:28:54 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM dim_location AS t WHERE 1=0
14/10/30 09:28:54 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM dim_location AS t WHERE 1=0
14/10/30 09:28:54 INFO orm.CompilationManager: $HADOOP_MAPRED_HOME is not set
Note: /tmp/sqoop-root/compile/159b8e57e91397f8c48f4455f6da0e5a/dim_location.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
14/10/30 09:28:55 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-root/compile/159b8e57e91397f8c48f4455f6da0e5a/dim_location.jar
14/10/30 09:28:55 INFO mapreduce.ImportJobBase: Beginning import of dim_location
14/10/30 09:28:56 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM dim_location AS t WHERE 1=0
14/10/30 09:28:56 INFO snitch.Workload: Setting my workload to Cassandra
14/10/30 09:28:58 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
14/10/30 09:28:59 INFO db.DataDrivenDBInputFormat: BoundingValsQuery: SELECT MIN(market), MAX(market) FROM dim_location
14/10/30 09:28:59 WARN db.TextSplitter: Generating splits for a textual index column.
14/10/30 09:28:59 WARN db.TextSplitter: If your database sorts in a case-insensitive order, this may result in a partial import or duplicate records.
14/10/30 09:28:59 WARN db.TextSplitter: You are strongly encouraged to choose an integral split column.
14/10/30 09:29:00 INFO mapred.JobClient: Running job: job_201410291321_0012
14/10/30 09:29:01 INFO mapred.JobClient:  map 0% reduce 0%
14/10/30 09:29:18 INFO mapred.JobClient:  map 20% reduce 0%
14/10/30 09:29:22 INFO mapred.JobClient:  map 40% reduce 0%
14/10/30 09:29:25 INFO mapred.JobClient:  map 60% reduce 0%
14/10/30 09:29:28 INFO mapred.JobClient:  map 80% reduce 0%
14/10/30 09:29:31 INFO mapred.JobClient:  map 100% reduce 0%
14/10/30 09:29:34 INFO mapred.JobClient: Job complete: job_201410291321_0012
14/10/30 09:29:34 INFO mapred.JobClient: Counters: 18
14/10/30 09:29:34 INFO mapred.JobClient:   Job Counters
14/10/30 09:29:34 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=29652
14/10/30 09:29:34 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0
14/10/30 09:29:34 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
14/10/30 09:29:34 INFO mapred.JobClient:     Launched map tasks=5
14/10/30 09:29:34 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=0
14/10/30 09:29:34 INFO mapred.JobClient:   File Output Format Counters
14/10/30 09:29:34 INFO mapred.JobClient:     Bytes Written=2003
14/10/30 09:29:34 INFO mapred.JobClient:   FileSystemCounters
14/10/30 09:29:34 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=130485
14/10/30 09:29:34 INFO mapred.JobClient:     CFS_BYTES_WRITTEN=2003
14/10/30 09:29:34 INFO mapred.JobClient:     CFS_BYTES_READ=664
14/10/30 09:29:34 INFO mapred.JobClient:   File Input Format Counters
14/10/30 09:29:34 INFO mapred.JobClient:     Bytes Read=0
14/10/30 09:29:34 INFO mapred.JobClient:   Map-Reduce Framework
14/10/30 09:29:34 INFO mapred.JobClient:     Map input records=98
14/10/30 09:29:34 INFO mapred.JobClient:     Physical memory (bytes) snapshot=985702400
14/10/30 09:29:34 INFO mapred.JobClient:     Spilled Records=0
14/10/30 09:29:34 INFO mapred.JobClient:     CPU time spent (ms)=1260
14/10/30 09:29:34 INFO mapred.JobClient:     Total committed heap usage (bytes)=1249378304
14/10/30 09:29:34 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=8317739008
14/10/30 09:29:34 INFO mapred.JobClient:     Map output records=98
14/10/30 09:29:34 INFO mapred.JobClient:     SPLIT_RAW_BYTES=664
14/10/30 09:29:34 INFO mapreduce.ImportJobBase: Transferred 0 bytes in 38.8727 seconds (0 bytes/sec)
14/10/30 09:29:34 INFO mapreduce.ImportJobBase: Retrieved 98 records.
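As an aside, the ConnFactory warning at the top of the log suggests naming the connection manager explicitly when --driver is used. A sketch of the extra options-file lines (the generic manager is what Sqoop says it falls back to anyway; whether a Vertica-specific manager is available in your setup is an open question):

```
--connection-manager
org.apache.sqoop.manager.GenericJdbcManager
```

This only silences the warning; it does not change which manager actually runs the import.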

Does anyone have an idea of what's going on here? Thanks!

Run the command below to see which files are on CFS:

dse hadoop fs -ls <location given in target directory>
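It is also worth checking whether any rows reached the Cassandra table at all, since the job reports 98 records retrieved but 0 bytes transferred. A quick check from cqlsh (the host is a placeholder for your Cassandra node):

```
$ cqlsh xx.xxx.xx.xxx
cqlsh> SELECT COUNT(*) FROM npa_nxx.npa_nxx_data;
```

A count of 0 here, together with files appearing in the CFS listing above, would indicate the data was written to the filesystem rather than into the Cassandra table.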

jdbc cassandra sqoop datastax-enterprise vertica
