Explore storing continuous ingest bulk import files in S3 #94
I suspect the procedure for this would be the following:
https://hadoop.apache.org/docs/current/hadoop-aws/tools/hadoop-aws/index.html#S3A

Current bulk import test docs: bulk-test.md
Below are some notes from copying bulk data from HDFS to S3:

```sh
# bulk files were generated into the /tmp/bt dir in HDFS

# prep the directory before distcp; assuming all splits files are the same, just keep one
hadoop fs -mv /tmp/bt/1/splits.txt /tmp/bt
hadoop fs -rm /tmp/bt/*/splits.txt
hadoop fs -rm /tmp/bt/*/files/_SUCCESS

# get the S3 libs on the local Hadoop classpath:
# edit the following file and set: export HADOOP_OPTIONAL_TOOLS="hadoop-aws"
vim $HADOOP_HOME/etc/hadoop/hadoop-env.sh

# the remote MapReduce jobs will need the S3 jars on the classpath; define the
# following (the jar versions may need to change for your version of Hadoop)
export LIBJARS=$HADOOP_HOME/share/hadoop/tools/lib/aws-java-sdk-bundle-1.11.375.jar,$HADOOP_HOME/share/hadoop/tools/lib/hadoop-aws-3.2.0.jar

# the following command will distcp the files to the bucket
hadoop distcp -libjars ${LIBJARS} \
  -Dfs.s3a.access.key=$AWS_KEY -Dfs.s3a.secret.key=$AWS_SECRET \
  hdfs://leader1:8020/tmp/bt s3a://$AWS_BUCKET/continuous-1000
```
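As an alternative to passing credentials on every command with `-D` flags, the same S3A settings could live in `$HADOOP_HOME/etc/hadoop/core-site.xml`. A hedged sketch; the property names are the standard Hadoop S3A ones, and the values are placeholders:

```xml
<!-- core-site.xml: S3A credentials for distcp and MapReduce jobs.
     Placeholder values; prefer a credential provider over plaintext keys
     if the file is readable by other users. -->
<property>
  <name>fs.s3a.access.key</name>
  <value>YOUR_ACCESS_KEY</value>
</property>
<property>
  <name>fs.s3a.secret.key</name>
  <value>YOUR_SECRET_KEY</value>
</property>
```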
When running the bulk import continuous ingest test, it can take a while to generate a good bit of data to start testing. I am not sure, but it may be faster to generate a data set once and store it in S3. Future tests could then reuse that data set.

I think it would be interesting to experiment with this, and if it works well, add documentation to the bulk import test docs explaining how to do it. One gotcha with this approach is that anyone running a test needs to be consistent with split points. A simple way to address this problem would be to store a file of split points in S3 with the data.
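One way to keep testers consistent on split points could be to normalize the splits file and store a checksum next to the data, so each run can verify it is using the same splits before creating the table. A minimal sketch; the file names `splits.normalized.txt` and `splits.sha256` are hypothetical, and the example split points stand in for the real `splits.txt` kept with the data in S3:

```shell
# Example split points; in a real run these come from the splits.txt
# stored alongside the bulk files in S3.
printf '3\n1\n2\n2\n' > splits.txt

# Normalize (sort, drop duplicates) so every tester derives identical splits.
sort -u splits.txt > splits.normalized.txt

# Record a checksum to store next to the data in the bucket.
sha256sum splits.normalized.txt > splits.sha256

# Later runs verify the splits match before bulk importing against the table.
sha256sum -c splits.sha256
```

The checksum check is cheap insurance: a bulk import against a table whose split points differ from those used to generate the files would silently produce a different data layout than the stored data set expects.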