1. How to run a MapReduce program in distributed mode
2. How to write a simple MapReduce program for Hadoop in Python
3. How to run Hadoop MR jobs on MaxCompute
4. How do you become a professional programmer step by step from scratch? Do you have to master the lowest layers first? (Asking seniors for advice)
How to run a MapReduce program in distributed mode

I. First, understand this premise (reposted)

If you launch a MapReduce program directly from an Eclipse project on Windows, you first need to copy the xml files from the Hadoop cluster's configuration directory into the src directory, so that the program automatically reads the cluster addresses and runs in distributed mode (you can also write Java code yourself to set the job's Configuration properties).

If you do not copy them, so that the project's bin directory has no complete xml configuration files, then a MapReduce program launched from Windows runs entirely in the local JVM, and the job name carries a "local" prefix, e.g. job_local_. That is not truly distributed execution of a MapReduce program.

You would probably have to study the source of org.apache.hadoop.conf.Configuration for the details; in any case, the xml configuration files decide whether MapReduce uses the local Windows file system or the remote HDFS, and whether the mappers and reducers run in the local JVM or in the JVMs of the cluster machines.
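As an illustration of the second option above, the cluster addresses can also be set directly on the Configuration object instead of copying the xml files. The snippet below is only a minimal sketch assuming standard Hadoop 2.x property names; the hostname and port are placeholders for your own cluster:

// Point the job at the cluster instead of the local runner ("master:9000" is a placeholder).
Configuration conf = new Configuration();
conf.set("fs.defaultFS", "hdfs://master:9000");               // use HDFS rather than the local Windows file system
conf.set("mapreduce.framework.name", "yarn");                  // submit to YARN instead of running in the local JVM
conf.set("yarn.resourcemanager.hostname", "master");           // where the job is submitted
conf.set("mapreduce.app-submission.cross-platform", "true");   // needed when submitting from Windows
Job job = Job.getInstance(conf, "myJob");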
II. Conclusions of this article

First point: to run MapReduce from Windows, the jar must be copied to all slave nodes before the MapReduce program will run correctly in distributed mode. (My requirement was to trigger a distributed MapReduce run from Windows.)

Second point: on Linux, you only need to copy the jar file to the cluster master and run "hadoop jar Package.jar MainClassName" to run the MapReduce program in distributed mode.

Third point: the method in Appendix 1 is recommended; it packages the jar automatically, uploads it, and runs the MapReduce program in distributed mode.

Appendix 1. Recommended method: automatic jar packaging and upload, with distributed execution of the MapReduce program.

Please refer to these five blog posts first:
Hadoop Job Submission Analysis (1) ~ (5)

Copy EJob.java from the attachment of those posts into your project, then add the following method and code to main.
public static File createPack() throws IOException {
    // package the project's bin directory into a temporary jar
    File jarFile = EJob.createTempJar("bin");
    // make the classes in that jar visible to the current thread
    ClassLoader classLoader = EJob.getClassLoader();
    Thread.currentThread().setContextClassLoader(classLoader);
    return jarFile;
}
In the job-launch code, where the job is created:

Job job = Job.getInstance(conf, "testAnaAction");

add:

String jarPath = createPack().getPath();
job.setJar(jarPath);

You can then run the MapReduce program in distributed mode from Windows with a plain "Run as Java Application", without uploading the jar file by hand.
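Putting these fragments together, a complete launch method might look roughly like the sketch below. EJob comes from the attachment of the posts cited above; MyMapper, MyReducer and the input/output paths are placeholders, not names from this article:

public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // build a temporary jar from bin/ so Hadoop can ship our classes to the cluster
    File jarFile = createPack();
    Job job = Job.getInstance(conf, "testAnaAction");
    job.setJar(jarFile.getPath());           // use the freshly packed jar instead of a pre-installed one
    job.setMapperClass(MyMapper.class);      // placeholder mapper
    job.setReducerClass(MyReducer.class);    // placeholder reducer
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path("/input"));     // placeholder HDFS paths
    FileOutputFormat.setOutputPath(job, new Path("/output"));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
}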
Appendix 2. The tests that led to the conclusions

(I did not find time to read the books, so I could only reach the conclusions through crude testing.)

1. Tests triggered from Windows: right-click the main program's java file in Eclipse, then "run as application", or use the Hadoop plugin's "run on hadoop".

1) If the jar is not packaged and copied onto any Linux machine in the cluster, it fails with:
[work] -- ::, - org.apache.hadoop.mapreduce.Job - [main] INFO org.apache.hadoop.mapreduce.Job - map 0% reduce 0%
[work] -- ::, - org.apache.hadoop.mapreduce.Job - [main] INFO org.apache.hadoop.mapreduce.Job - Task Id : attempt___m__0, Status : FAILED
Error: java.lang.RuntimeException: java.lang.ClassNotFoundException: Class bookCount.BookCount$BookCountMapper not found
at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:)
at org.apache.hadoop.mapreduce.task.JobContextImpl.getMapperClass(JobContextImpl.java:)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:)
Caused by: java.lang.ClassNotFoundException: Class bookCount.BookCount$BookCountMapper not found
at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:)
at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:)
... 8 more
# Error: the block above repeats a few more times
-- ::, - org.apache.hadoop.mapreduce.Job - [main] INFO org.apache.hadoop.mapreduce.Job - map % reduce %
Symptom: errors, no progress, no result.

2) Copy the jar only to the master's $HADOOP_HOME/share/hadoop/mapreduce/ directory, then trigger the run from Windows Eclipse via "run as application" or the Hadoop plugin's "run on hadoop": it fails with the same error.
Symptom: errors, no progress, no result.
3) Copy the jar to some of the slaves' $HADOOP_HOME/share/hadoop/mapreduce/ directories, then trigger the run from Windows Eclipse via "run as application" or the Hadoop plugin's "run on hadoop".
It fails with:
Error: java.lang.RuntimeException: java.lang.ClassNotFoundException: Class bookCount.BookCount$BookCountMapper not found
at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:)
at org.apache.hadoop.mapreduce.task.JobContextImpl.getMapperClass(JobContextImpl.java:)
and also with:
Error: java.lang.RuntimeException: java.lang.ClassNotFoundException: Class bookCount.BookCount$BookCountReducer not found

Symptom: there are errors, but there is still progress and a final result.
4) Copy the jar to all slaves' $HADOOP_HOME/share/hadoop/mapreduce/ directories, then trigger the run from Windows Eclipse via "run as application" or the Hadoop plugin's "run on hadoop":
Symptom: no errors, progress shown, result produced.
First conclusion: to run MapReduce from Windows, the jar must be copied to all slave nodes before the MapReduce program will run correctly in distributed mode.
2. Tests triggered on Linux with the following command.
hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/bookCount.jar bookCount.BookCount

1) Copy the jar only to the master and run it on the master.
Symptom: no errors, progress shown, result produced.
2) Copy the jar to any one slave node and run it on that slave.
Symptom: no errors, progress shown, result produced.
But on some nodes the run fails as follows and produces no result:
// :: INFO mapreduce.JobSubmitter: Cleaning up the staging area /tmp/hadoop-yarn/staging/hduser/.staging/job__
Exception in thread "main" java.lang.NoSuchFieldError: DEFAULT_MAPREDUCE_APPLICATION_CLASSPATH
at org.apache.hadoop.mapreduce.v2.util.MRApps.setMRFrameworkClasspath(MRApps.java:)
at org.apache.hadoop.mapreduce.v2.util.MRApps.setClasspath(MRApps.java:)
at org.apache.hadoop.mapred.YARNRunner.createApplicationSubmissionContext(YARNRunner.java:)
at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:)
at org.apache.hadoop.mapreduce.Job$.run(Job.java:)
at org.apache.hadoop.mapreduce.Job$.run(Job.java:)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:)
at com.etrans.anaSpeed.AnaActionMr.run(AnaActionMr.java:)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:)
at com.etrans.anaSpeed.AnaActionMr.main(AnaActionMr.java:)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:)
at java.lang.reflect.Method.invoke(Method.java:)
at org.apache.hadoop.util.RunJar.main(RunJar.java:)
Second conclusion: on Linux, you only need to copy the jar file to the cluster master and run "hadoop jar Package.jar MainClassName" to run the MapReduce program in distributed mode.
How to write a simple MapReduce program for Hadoop in Python
Michael G. Noll described on his blog how to write MapReduce programs for Hadoop in Python, and gogamza from Korea described on his blog how to write them in C (I modified his program slightly, because his Map split words on the tab character). I have merged their two articles here so that Hadoop users in China can also write MapReduce programs in languages other than Java.

First you need a working Hadoop cluster; there are plenty of guides online, for example "Hadoop学习笔记二 安装部署" (Hadoop study notes 2: installation and deployment). Hadoop Streaming lets us use MapReduce from languages other than Java: Streaming exchanges data with the Map and Reduce programs we write through STDIN (standard input) and STDOUT (standard output). Anything that can read STDIN and write STDOUT can be used to write a MapReduce program, for example Python's sys.stdin and sys.stdout, or stdin and stdout in C.

We again use Hadoop's WordCount example to show how to write MapReduce. In WordCount the task is to count how often each word occurs in a batch of documents. The Map program receives the documents line by line; our Map program splits each line on whitespace into an array, iterates over it, and writes each word with a "1" to standard output, meaning the word occurred once. In Reduce we then total up the occurrences per word.

Python code

Map: mapper.py

#!/usr/bin/env python
import sys

# maps words to their counts
word2count = {}

# input comes from STDIN (standard input)
for line in sys.stdin:
    # remove leading and trailing whitespace
    line = line.strip()
    # split the line into words while removing any empty strings
    words = filter(lambda word: word, line.split())
    # increase counters
    for word in words:
        # write the results to STDOUT (standard output);
        # what we output here will be the input for the
        # Reduce step, i.e. the input for reducer.py
        #
        # tab-delimited; the trivial word count is 1
        print '%s\t%s' % (word, 1)

Reduce: reducer.py

#!/usr/bin/env python
from operator import itemgetter
import sys

# maps words to their counts
word2count = {}

# input comes from STDIN
for line in sys.stdin:
    # remove leading and trailing whitespace
    line = line.strip()
    # parse the input we got from mapper.py
    word, count = line.split()
    # convert count (currently a string) to int
    try:
        count = int(count)
        word2count[word] = word2count.get(word, 0) + count
    except ValueError:
        # count was not a number, so silently
        # ignore/discard this line
        pass

# sort the words lexicographically;
#
# this step is NOT required, we just do it so that our
# final output will look more like the official Hadoop
# word count examples
sorted_word2count = sorted(word2count.items(), key=itemgetter(0))

# write the results to STDOUT (standard output)
for word, count in sorted_word2count:
    print '%s\t%s' % (word, count)

C code

Map: Mapper.c

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <strings.h>

#define BUF_SIZE 2048   /* buffer size; the original value was lost in this copy */
#define DELIM "\n"

int main(int argc, char *argv[])
{
    char buffer[BUF_SIZE];
    while (fgets(buffer, BUF_SIZE - 1, stdin)) {
        int len = strlen(buffer);
        if (buffer[len - 1] == '\n')
            buffer[len - 1] = 0;
        char *querys = index(buffer, ' ');
        char *query = NULL;
        if (querys == NULL)
            continue;
        querys += 1; /* not to include '\t' */
        query = strtok(buffer, " ");
        while (query) {
            printf("%s\t1\n", query);
            query = strtok(NULL, " ");
        }
    }
    return 0;
}

Reduce: Reducer.c

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <strings.h>

#define BUFFER_SIZE 1024   /* buffer size; the original value was lost in this copy */
#define DELIM "\t"

int main(int argc, char *argv[])
{
    char strLastKey[BUFFER_SIZE];
    char strLine[BUFFER_SIZE];
    int count = 0;

    *strLastKey = '\0';
    *strLine = '\0';

    while (fgets(strLine, BUFFER_SIZE - 1, stdin)) {
        char *strCurrKey = NULL;
        char *strCurrNum = NULL;

        strCurrKey = strtok(strLine, DELIM);
        strCurrNum = strtok(NULL, DELIM); /* necessary to check error but . */

        if (strLastKey[0] == '\0') {
            strcpy(strLastKey, strCurrKey);
        }
        if (strcmp(strCurrKey, strLastKey)) {
            printf("%s\t%d\n", strLastKey, count);
            count = atoi(strCurrNum);
        } else {
            count += atoi(strCurrNum);
        }
        strcpy(strLastKey, strCurrKey);
    }
    printf("%s\t%d\n", strLastKey, count); /* flush the count */
    return 0;
}

First let us test the source locally:

chmod +x mapper.py
chmod +x reducer.py
echo "foo foo quux labs foo bar quux" | ./mapper.py | ./reducer.py
bar 1
foo 3
labs 1
quux 2
g++ Mapper.c -o Mapper
g++ Reducer.c -o Reducer
chmod +x Mapper
chmod +x Reducer
echo "foo foo quux labs foo bar quux" | ./Mapper | ./Reducer
bar 1
foo 2
labs 1
quux 1
foo 1
quux 1

You may notice that the C output differs from Python's: Python first collects the counts in a dictionary, whereas on Hadoop the framework sorts the mapper output, so identical words arrive consecutively on the reducer's standard input.

Running the program on Hadoop

First we need to download our test documents: wget … (the link is given on the original page). Below is a MapReduce program written in PHP, taken from that page, for PHP programmers' reference:

Map: mapper.php

#!/usr/bin/php
<?php
$word2count = array();
// input comes from STDIN (standard input)
while (($line = fgets(STDIN)) !== false) {
    // remove leading and trailing whitespace and lowercase
    $line = strtolower(trim($line));
    // split the line into words while removing any empty string
    $words = preg_split('/\W/', $line, 0, PREG_SPLIT_NO_EMPTY);
    // increase counters
    foreach ($words as $word) {
        $word2count[$word] += 1;
    }
}
// write the results to STDOUT (standard output)
// what we output here will be the input for the
// Reduce step, i.e. the input for reducer.py
foreach ($word2count as $word => $count) {
    // tab-delimited
    echo $word, chr(9), $count, PHP_EOL;
}
?>

Reduce: reducer.php

#!/usr/bin/php
<?php
$word2count = array();
// input comes from STDIN
while (($line = fgets(STDIN)) !== false) {
    // remove leading and trailing whitespace
    $line = trim($line);
    // parse the input we got from mapper.php
    list($word, $count) = explode(chr(9), $line);
    // convert count (currently a string) to int
    $count = intval($count);
    // sum counts
    if ($count > 0) $word2count[$word] += $count;
}
// sort the words lexicographically
//
// this step is NOT required, we just do it so that our
// final output will look more like the official Hadoop
// word count examples
ksort($word2count);
// write the results to STDOUT (standard output)
foreach ($word2count as $word => $count) {
    echo $word, chr(9), $count, PHP_EOL;
}
?>

Author: 马士华, posted on: --
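For reference, scripts like mapper.py and reducer.py above are normally submitted through the Hadoop Streaming jar. The command below is only a rough sketch: the streaming jar path and the HDFS input/output directories are placeholders, not values from the original article.

hadoop jar $HADOOP_HOME/share/hadoop/tools/lib/hadoop-streaming-*.jar \
    -input /user/hadoop/wordcount/input \
    -output /user/hadoop/wordcount/output \
    -mapper mapper.py \
    -reducer reducer.py \
    -file mapper.py \
    -file reducer.py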
How to run Hadoop MR jobs on MaxCompute
MaxCompute (formerly ODPS) has its own MapReduce programming model and interfaces. Put simply, the input and output of these interfaces are MaxCompute Tables, and the data being processed is organized as Records; this describes data processing over Tables well, but the programming interfaces differ considerably from community Hadoop. A Hadoop user who wants to migrate an existing Hadoop MR job to MaxCompute MR has to rewrite the MR code against the MaxCompute interfaces, compile and debug it, and only after it runs correctly package it into a jar that can be deployed to the MaxCompute platform. This process is quite tedious and costs a lot of development and testing effort. Being able to run the original Hadoop MR code on MaxCompute with no changes, or only minor ones, would be a far more attractive approach.
The MaxCompute platform now provides an adapter tool from Hadoop MR to MaxCompute MR that already achieves a degree of binary-level compatibility for Hadoop MR jobs: by supplying some configuration, and without changing any code, a user can take an MR jar that used to run on Hadoop and run it directly on MaxCompute. The plugin is currently in a testing stage and does not yet support user-defined comparators or user-defined key types. Below, the WordCount program is used as an example to introduce the plugin's basic usage.
The basic steps for running a Hadoop MR job on MaxCompute with this plugin are as follows:
1. Download the Hadoop MR plugin
Download the plugin; the package is named hadoop2openmr-1.0.jar. Note that this jar already contains the dependencies of hadoop-2.7.2, so do not bundle Hadoop dependencies into your job's jar, to avoid version conflicts.
2. Prepare the Hadoop MR program jar
Compile and export the WordCount jar, wordcount_test.jar. The source code of the wordcount program is as follows:
package com.aliyun.odps.mapred.example.hadoop;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

import java.io.IOException;
import java.util.StringTokenizer;

public class WordCount {

    public static class TokenizerMapper
            extends Mapper<Object, Text, Text, IntWritable> {

        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(Object key, Text value, Context context
        ) throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, one);
            }
        }
    }

    public static class IntSumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {

        private IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values,
                           Context context
        ) throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
3. Prepare test data
Create the input and output tables:
create table if not exists wc_in(line string);
create table if not exists wc_out(key string, cnt bigint);
Import the data into the input table through Tunnel.
The content of the text file data.txt to be imported is:
hello maxcompute
hello mapreduce
For example, data.txt can be imported into wc_in with the following command:
tunnel upload data.txt wc_in;
4. Prepare the configuration that maps tables to HDFS file paths
The configuration file is named wordcount-table-res.conf:
{
"file:/foo": {
"resolver": {
"resolver": "c.TextFileResolver",
"properties": {
"text.resolver.columns.combine.enable": "true",
"text.resolver.seperator": "\t"
}
},
"tableInfos": [
{
"tblName": "wc_in",
"partSpec": { },
"label": "__default__"
}
],
"matchMode": "exact"
},
"file:/bar": {
"resolver": {
"resolver": "openmr.resolver.BinaryFileResolver",
"properties": {
"binary.resolver.input.key.class" : "org.apache.hadoop.io.Text",
"binary.resolver.input.value.class" : "org.apache.hadoop.io.LongWritable"
}
},
"tableInfos": [
{
"tblName": "wc_out",
"partSpec": { },
"label": "__default__"
}
],
"matchMode": "fuzzy"
}
}
How do you become a professional programmer step by step from scratch? Do you have to master the lowest layers first? (Asking seniors for advice)
Preface
Do you feel that the programs you wrote at school were little more than toys? Even if you enter the workplace with little experience, you can still work through the exercises below. (A friend's complaint: school courses always start from theory, and the assignments never seem to have any practical use; it is better to start from real needs at work.)
Suggestions:
Don't buy books at random and don't chase every new technology and buzzword; the fundamentals took a long time to accumulate and will stay relevant for years to come.
Look back at history and at how technologies developed along the timeline; only then can you understand what tomorrow will look like.
Get hands-on: no matter how simple an example is, type it in yourself at least once and check that you understand every detail.
Learn to think: ask why it is done this way rather than that way, and extend what you learn to related problems.
Note: you may wonder why the material below leans so heavily towards Unix/Linux. That is because I believe Windows programming may have little future, for the following reasons:
User interfaces are now dominated by two things: 1) the Web, and 2) mobile devices running iOS or Android. Windows GUIs are no longer in demand.
More and more companies build their systems on low-cost, high-performance Linux and all kinds of open-source technology; Windows costs too much.
Microsoft's stack changes too fast and is not durable; they are simply toying with programmers. See "The Windows Programming Revolution" (《Windows编程革命史》) for details.
So, my personal view is that the trend is Web + mobile on the front end and Linux + open source on the back end, with basically no role left for Windows in development.
Getting started
1. Learn a scripting language, e.g. Python/Ruby
It frees you from the fear of lower-level languages, and a scripting language lets you quickly build small programs you can actually use. Practice projects:
Process text files or csv (keywords: python csv, python open, python sys): read a local file and process it line by line (e.g. word count, or log processing)
Walk the local file system (sys, os, path), e.g. write a program that collects the sizes of all files under a directory, sorts them by various criteria, and saves the result
Talk to a database (python sqlite): write a small script that counts the number of entries in a database
Learn to debug with quick-and-dirty techniques such as print statements
Learn to use Google (phrase, domain, use a reader to follow tech blogs)
Why learn a scripting language? Because they are simply so convenient: much of the time we need a small tool or script to solve a problem, and you will find the heavyweight languages too clumsy for that.
2. Get fluent with a programmer's editor (not an IDE) and some basic tools
Vim / Emacs / Notepad++: learn how to configure code completion, appearance, external commands, and so on.
Source Insight (or ctags)
The point of these tools is not to look cool, but that these editors make viewing and editing code, configuration files, and logs much faster and more efficient.
3. Get familiar with the Unix/Linux shell and common command-line tools
If you use Windows, at least learn to run Linux in a virtual machine; VMware Player is free, so install Ubuntu.
Use graphical interfaces as little as possible.
Learn to use man to read the manual pages.
File system layout and basic operations: ls/chmod/chown/rm/find/ln/cat/mount/mkdir/tar/gzip ...
Learn some text-processing commands: sed/awk/grep/tail/less/more ...
Learn some administration commands: ps/top/lsof/netstat/kill/tcpdump/iptables/dd ...
Get to know the configuration files under /etc, learn to read the system logs under /var/log and the runtime information under /proc.
Understand regular expressions and use them to find files.
For a programmer, Unix/Linux is much simpler than Windows. (See my CSDN post from four years ago, 《其实Unix很简单》 ("Unix Is Actually Simple").) Once you learn Unix/Linux you will find that graphical interfaces are at times extremely clumsy and dramatically reduce your productivity.
4. Learn Web fundamentals (HTML/CSS/JS) + server-side technology (LAMP)
The future belongs to the Web; the best site for learning the Web fundamentals is W3School.
Learn basic HTML syntax
Learn how CSS selects HTML elements and applies basic styles (keyword: box model)
Learn to use Firefox + Firebug or Chrome to inspect the structure of web pages you find impressive, and to modify them on the fly.
Learn to use JavaScript to manipulate HTML elements. Understand the DOM and dynamic pages (Dynamic HTML: The Definitive Reference, 3rd Edition - O'Reilly Media); free chapters are available online and are quite sufficient. Or consult a DOM reference.
Learn to use Firefox + Firebug or Chrome to debug JavaScript code (set breakpoints, inspect variables, profile performance, use the console, etc.)
Set up Apache or Nginx on one machine
Learn PHP; have the PHP back end exchange data with the HTML front end, to get a first feel for how a server responds to browser requests. Implement a form submission that echoes the data back.
Connect PHP to a local or remote MySQL database (learning MySQL and SQL as you go is enough)
Follow a complete network programming course from a well-known university (for example: …); … (upgraded version: Kyoto Cabinet), Flare, MongoDB, CouchDB, Cassandra, Voldemort, and so on.