Welcome.

A quick test with PHP's cURL functions:
// curl -X GET -H "Content-Type:application/json" -H "Authorization: token 4e56266f2502936e0378ea6a985dc74a5bec4280" http://user.endv.cn/v1/datastreams/plug-status/datapoint/
$url = "http://localhost/web_services.php";
$post_data = array("username" => "bob", "key" => "12345");
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
// Send the form fields as an HTTP POST body
curl_setopt($ch, CURLOPT_POST, 1);
curl_setopt($ch, CURLOPT_POSTFIELDS, http_build_query($post_data));

$output = curl_exec($ch);
curl_close($ch);

// Print the returned data
print_r($output);
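For comparison, here is the same request from the command line; a minimal sketch using the -d (POST) option that is introduced later in this article:

$ curl -d "username=bob&key=12345" http://localhost/web_services.php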
A brief introduction to the use of curl

curl is a very powerful HTTP command-line tool under Linux.

1) Without further ado, let's start here!
$ curl http://code.endv.cn

After pressing Enter, the HTML of code.endv.cn is displayed on your screen~
2) Well, what if you want to save the page you just fetched? Should you do this?

$ curl http://code.endv.cn > page.html

Of course you can, but it doesn't have to be that troublesome! Just use curl's built-in option for saving the HTTP result: -o
$ curl -o page.html http://code.endv.cn

This way a download progress meter appears on the screen, and when it reaches 100% the page has been saved.
3) What?! You can't reach the site? Then your proxy probably isn't configured. With curl, you can specify the proxy server and port to use for HTTP access with this option: -x
$ curl -x 123.45.67.89:1080 -o page.html http://code.endv.cn

4) It is annoying that some websites use cookies to record session information. Browsers like IE/NN handle cookie information easily, but what about our curl? .....

Let's learn this option: -D, which dumps the headers of the HTTP response, including the cookie information, into a file.
$ curl -x 123.45.67.89:1080 -o page.html -D cookie0001.txt http://code.endv.cn

This way, when the page is saved into page.html, the cookie information is also saved into cookie0001.txt.
5) So how do you keep using the cookie information left behind last time on the next visit? You know, many websites rely on watching your cookies to decide whether you are visiting them improperly.

This time we use the option that attaches the previously saved cookie information to the HTTP request: -b
$ curl -x 123.45.67.89:1080 -o page1.html -D cookie0002.txt -b cookie0001.txt http://code.endv.cn

This way, we can simulate almost all IE operations when accessing the web page!
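A side note beyond the original text: curl also has a dedicated cookie engine, so instead of replaying raw header dumps you can keep a Netscape-format cookie jar. A minimal sketch, where cookies.txt is written with -c on the first request and replayed (and updated) on the second:

$ curl -c cookies.txt -o page.html http://code.endv.cn
$ curl -b cookies.txt -c cookies.txt -o page2.html http://code.endv.cn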
6) Wait a moment~ I seem to have forgotten something~

That's right! Browser information.

Some annoying websites insist that you visit them with a specific browser, sometimes even with a specific version. Who has time to go hunting for those weird browsers!?

Fortunately, curl gives us a useful option that lets us declare arbitrary browser information for a visit: -A
$ curl -A \"Mozilla\/4.0 (compatible; MSIE 6.0; Windows NT 5.0)\" -x 123.45.67.89:1080 -o page.html -D cookie0001.txt http:\/\/code.endv.cn<\/pre>In this way, the server receives When requesting access, you will be considered to be an IE6.0 running on Windows 2000. Hey, hey, in fact, maybe you are using a Mac! <\/p>
And \"Mozilla\/4.73 [en] (X11; U; Linux 2.2; 15 i686\" can tell the other party that you are running Linux on a PC and using Netscape 4.73, hahaha<\/p>
7) Another commonly used server-side restriction is checking the Referer of HTTP requests. For example, when you visit the homepage first and then a download page it links to, the Referer of the second request is the address of the page you visited first. If the server finds that a request for the download page carries a Referer that is not the homepage address, it can conclude that the link was hot-linked~

Hate, hate~ but I just want to hot-link~!!

Fortunately, curl gives us the option to set the Referer: -e
$ curl -A \"Mozilla\/4.0 (compatible; MSIE 6.0; Windows NT 5.0)\" -x 123.45.67.89:1080 -e \"mail.linuxidc.com\" -o page.html -D cookie0001.txt http:\/\/code.endv.cn<\/pre>In this way, you can deceive the other party's server. You clicked a link from mail.linuxidc.com. , Hahaha<\/p>
8) Writing all this, I realize I missed something important: using curl to download files!

As just shown, -o downloads a page into a file, and downloading files works the same way. For example,
$ curl -o 1.jpg http://img.endv.cn/~zzh/screen1.JPG

Here is a new option: -O (capital O), used like this:

$ curl -O http://img.endv.cn/~zzh/screen1.JPG

This way, the file is automatically saved locally under its name on the server!
Here's one more useful trick.

If, besides screen1.JPG, there are also screen2.JPG, screen3.JPG, ..., screen10.JPG to download, do we have to write a script for it?

No need!

In curl, just write this:
$ curl -O http://img.endv.cn/~zzh/screen[1-10].JPG

Hahaha, isn't that awesome?!~
9) Once more, let's continue with downloading!

$ curl -O http://img.endv.cn/~{zzh,nick}/[001-201].JPG

The downloads this produces are
~zzh/001.JPG
~zzh/002.JPG
...
~zzh/201.JPG
~nick/001.JPG
~nick/002.JPG
...
~nick/201.JPG

Convenient enough, right? Hahaha
Eh? Don't celebrate too early.

Since the file names under zzh/ and nick/ are all 001, 002, ..., 201, the downloads end up with the same names and the later files overwrite the earlier ones~
No problem, we have an even stronger trick!
$ curl -o "#2_#1.jpg" http://img.endv.cn/~{zzh,nick}/[001-201].JPG

(The name pattern is quoted so the shell does not treat # as the start of a comment.)

This is... a download with custom file names? Exactly, heh heh!

This way the downloaded files get custom names: #1 is replaced by the current {zzh,nick} value and #2 by the current [001-201] value, so originally ~zzh/001.JPG, after download: 001_zzh.jpg; originally ~nick/001.JPG, after download: 001_nick.jpg.

With this, duplicate file names are no longer a problem, heh heh.
More on downloading.

On Windows we are used to tools like FlashGet, which download a file in parallel chunks and resume broken transfers. curl is not outdone in these areas either, heh heh.

Say the connection drops in the middle of downloading screen1.JPG; we can resume like this:
$ curl -C - -O http://cgi2.tky.3wb.ne.jp/~zzh/screen1.JPG

(-C - tells curl to work out the resume offset from the partial file already on disk.)

Of course, don't try to fool me with a file that FlashGet downloaded halfway; partial files from other download tools are not necessarily usable~
For chunked downloading, we just use this option: -r

An example:

Say we have http://img.endv.cn/~zzh/zhao1.mp3 to download (Teacher Zhao's telephone recitation :D ). We can use commands like these:
$ curl -r 0-10240 -o "zhao.part1" http://img.endv.cn/~zzh/zhao1.mp3 &
$ curl -r 10241-20480 -o "zhao.part2" http://img.endv.cn/~zzh/zhao1.mp3 &
$ curl -r 20481-40960 -o "zhao.part3" http://img.endv.cn/~zzh/zhao1.mp3 &
$ curl -r 40961- -o "zhao.part4" http://img.endv.cn/~zzh/zhao1.mp3

This downloads the file in chunks. You do have to merge the pieces yourself afterwards: on UNIX or a Mac, cat zhao.part* > zhao.mp3 does it; on Windows, use copy /b, heh heh.
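A small sanity-check sketch beyond the original, assuming a Unix-like shell: -r ranges are inclusive on both ends, so part1 covers bytes 0 through 10240, part2 starts at 10241, and so on, with no gaps or overlaps. The merged size can be compared against the server's reported Content-Length:

$ cat zhao.part1 zhao.part2 zhao.part3 zhao.part4 > zhao.mp3
$ ls -l zhao.mp3
$ curl -sI http://img.endv.cn/~zzh/zhao1.mp3   # HEAD request; check the Content-Length header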
Everything above was downloading over HTTP, but FTP works just as well. The usage:

$ curl -u name:passwd ftp://ip:port/path/file

or the form everyone is familiar with:
$ curl ftp://name:passwd@ip:port/path/file

10) With downloads covered, uploads naturally come next. The upload option is -T

For example, to upload a file to an FTP server:

$ curl -T localfile -u name:passwd ftp://upload_site:port/path/

Of course, uploading a file to an HTTP server also works. For example:

$ curl -T localfile http://img.endv.cn/~zzh/abc.cgi

Note that in this case the protocol used is HTTP's PUT method.
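As an aside beyond the original: the same upload can be spelled with an explicit method, which is sometimes clearer in scripts. A rough sketch; --data-binary @file reads the whole file into the request body, so it is only approximately equivalent to the streaming -T form:

$ curl -X PUT --data-binary @localfile http://img.endv.cn/~zzh/abc.cgi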
Speaking of PUT, heh heh, that naturally reminds me that several other methods haven't been covered yet! GET and POST must not be forgotten.

For submitting an HTTP form, the commonly used modes are POST and GET.

GET mode needs no option at all; just write the variables into the URL. For example:
$ curl "http://code.endv.cn/login.cgi?user=nickwolfe&password=12345"

(The URL is quoted so the shell does not treat & as a background operator.)

The option for POST mode is -d. For example,

$ curl -d "user=nickwolfe&password=12345" http://code.endv.cn/login.cgi

is equivalent to submitting a login request to this site~
Whether to use GET mode or POST mode depends on how the program on the server side is set up.
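A side sketch beyond the original: curl can also convert -d data into a GET query string with -G, which is handy when the same fields have to be sent either way:

$ curl -G -d "user=nickwolfe" -d "password=12345" http://code.endv.cn/login.cgi

This sends the same request as the quoted-URL GET example above.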
One thing to note is file upload in POST mode. For example:
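A minimal sketch of such an upload, using curl's -F option, which sends multipart/form-data; the field name upload and the script path are illustrative placeholders, and @ tells curl to read the field's value from a local file:

$ curl -F upload=@localfile http://code.endv.cn/~zzh/up_file.cgi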