How do I get cURL code to compile on Windows 10?

I am following an online tutorial about creating a JavaScript-based server/client environment, and to test a POST method the author gave a block of cURL code to run. When I try to run it, however, I get errors.
I have done some research, and I'm fairly sure the provided code is meant for a Linux environment, but I'm on Windows 10. I tried changing the \ characters to ^ but I'm still getting errors. I have tried both the cmd prompt and PowerShell.
curl -X POST \
http://localhost:3000/signup \
-H 'Content-Type: application/json' \
-d '{
"email": "test5@test.com",
"password": "1234",
"name": "test5"
}'
I expected the code to post data to my database, but instead I get the following output:
C:\Users\ricks>curl -X POST \
curl: (6) Could not resolve host: \
C:\Users\ricks> http://localhost:3000/signup \
'http:' is not recognized as an internal or external command,
operable program or batch file.
C:\Users\ricks> -H 'Content-Type: application/json' \
'-H' is not recognized as an internal or external command,
operable program or batch file.
C:\Users\ricks> -d '{
'-d' is not recognized as an internal or external command,
operable program or batch file.
C:\Users\ricks>"email": "test5@test.com",
'"email":' is not recognized as an internal or external command,
operable program or batch file.
C:\Users\ricks>"password": "1234",
'"password":' is not recognized as an internal or external command,
operable program or batch file.
C:\Users\ricks>"name": "test5"
'"name":' is not recognized as an internal or external command,
operable program or batch file.

While all of the answers provided would undoubtedly let me run the cURL command I posted above, I found a workaround using only the Windows cmd prompt.
First, I put the whole command on a single line rather than spreading it over several. Second, I discovered that the remaining errors came primarily from un-escaped " characters. In the end, the following command worked correctly and POSTed data to my database!
curl -X POST http://localhost:3000/signup -H "Content-Type:application/json" -d "{\"email\" : \"test4@test.com\", \"password\" : \"1234\", \"name\": \"test5\" }"
While this is likely not the most sustainable approach, it was a good learning moment for me and it might help those looking to utilize a one-time cURL execution without downloading anything extra to their system.
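Another option that sidesteps most of the quoting differences between cmd and PowerShell is to keep the JSON in a file (body.json is just a hypothetical filename here) and let curl read it with -d @file:
curl -X POST http://localhost:3000/signup -H "Content-Type: application/json" -d @body.json
where body.json contains the same JSON object shown in the question, with no extra escaping needed.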

First, download and install Cygwin, and make sure to install the gcc-core, g++, make, autoconf, automake, git, libtool, and m4 packages during package selection (there may be more required; if you get a compile error, I can probably deduce the missing package from your compilation error, let me know).
And I guess you want your curl installation to support TLS/HTTPS? (Almost all websites require HTTPS these days.) In that case, you have to start by compiling OpenSSL.
(Replace 1_1_1c with the latest stable OpenSSL release... it is 1.1.1c as of writing.)
git clone -b 'OpenSSL_1_1_1c' --single-branch --depth 1 https://github.com/openssl/openssl
cd openssl
./config no-shared
make -j $(nproc)
(The last step may take a while.) OpenSSL's build script does not create a lib folder, but curl's build script expects the lib files to be in a lib folder inside the openssl folder, so after that, run
mkdir lib
cp *.a lib;
After OpenSSL is compiled, cd .. out of there; it's time to build curl.
(Replace 7_65_3 with whatever the latest stable curl release is; as of writing, it is 7.65.3.)
git clone -b 'curl-7_65_3' --single-branch --depth 1 https://github.com/curl/curl.git
cd curl
./buildconf
LDFLAGS="-static" ./configure --with-ssl=$(realpath ../openssl) --disable-shared --enable-static
make -j $(nproc)
(If you wonder why I used realpath: there appears to be a bug in curl's build script that makes it fail if you supply a relative path, so an absolute path seems to be required. If you wonder why I made a static build, aka --disable-shared --enable-static: you may have a different OpenSSL library in your $PATH, so to avoid a variation of DLL Hell, a static build is safer.)
Finally, after make has finished, you have your own curl build at the relative path ./src/curl, which you can run with:
./src/curl -X POST \
http://localhost:3000/signup \
-H 'Content-Type: application/json' \
-d '{
"email": "test5@test.com",
"password": "1234",
"name": "test5"
}'
in a Cygwin terminal (which has the same syntax as Linux terminals).
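As a quick sanity check (just a sketch), the version banner of the new binary should report the curl release you checked out and the OpenSSL you linked in, with SSL listed among the features:
./src/curl -V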
Alternatively, you can just install the curl package from Cygwin and use Cygwin's pre-compiled curl, but since you asked specifically how to COMPILE curl, there's your answer. I recommend you avoid the hassle and just install Cygwin's pre-compiled curl.
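For reference (a sketch, assuming the installer was saved as setup-x86_64.exe), Cygwin's pre-compiled curl can be installed non-interactively by re-running the setup program with a package flag:
setup-x86_64.exe -q -P curl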

I tried changing the \ characters to ^ but I'm still getting errors
This is not even the start of the problems in migrating the code. Pre-compiled Windows builds (32- and 64-bit, with and without Cygwin) are already available on the application's home site.

Related

Is there a way to send browser console messages to a remote syslog (or other) server?

I would like the console output of my web application to go to a centralized logging system. I am very flexible on the recipient side (syslog or whatever server).
There are plenty of packages to send messages to servers (and a quick solution would be to fetch to an ELK stack, for instance).
This is not what I am looking for: I would like the console errors/warnings to be forwarded (and, in addition, the application-generated ones, but that part is easy).
Is there a consensus solution for this problem?
More of a POC than a solution perhaps, but the Remote Debugging Protocol can be used. With Firefox:
1- Start a debugging server; the console of this instance will be captured. Pages to debug should be opened here. Dev tools must be open.
kdesu -u lmc -c "firefox --start-debugger-server 9123"
2- Start the debug Firefox client by opening about:debugging and connecting to the debug server. This instance acts only as a receiver for the packets.
3- Use tshark to capture packets. Output can be sent to the logging facility.
tshark -i lo -f 'tcp port 9123' -Y 'data.data contains "pageError"' -T ek -e tcp.payload | jq -r '.layers.tcp_payload[0]' - | head -n2 | xxd -r -p
Capturing on 'Loopback: lo'
Output
These JSON objects might be sent to the logging facility.
3 682:{"type":"pageError","pageError":{"errorMessage":"This page uses the non standard property “zoom”. Consider using calc() in the relevant property values, or using “transform” along with “transform-origin: 0 0”.","errorMessageName":"","sourceName":"https://www.acordesdcanciones.com/2020/12/la-abandone-y-no-sabia-chino-hidalgo.html","sourceId":null,"lineText":"","lineNumber":0,"columnNumber":0,"category":"Layout","innerWindowID":10737418988,"timeStamp":1629339439629,"warning":true,"error":false,"info":false,"private":false,"stacktrace":null,"notes":null,"chromeContext":false,"cssSelectors":"","isPromiseRejection":false},"from":"server1.conn1.child83/consoleActor2"}
Another tshark one-liner
tshark -i lo -f 'tcp port 9123' -Y 'data.data contains "pageError"' -T fields -e data.data | tail -n +6 | head -n3 | xxd -r -p >> data.json
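If the goal is to forward these messages to syslog rather than to a file, one option (a sketch, not part of the original setup; it assumes a local syslog daemon and the standard logger utility) is to decode each captured payload and hand it to logger:
tshark -l -i lo -f 'tcp port 9123' -Y 'data.data contains "pageError"' -T fields -e data.data |
while read -r hex; do
  # decode the hex payload back to JSON and tag it for syslog
  printf '%s' "$hex" | xxd -r -p | logger -t firefox-console
done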

Convert Pcap file to text using python script

I used the code below to convert a pcap file to a text file with the given columns. The code doesn't give any error and it produces an output text file, but the file is empty, without any data. Can you please help me find the error?
import os  # os.system is used below
from scapy.all import *

data = "Emotet-infection-with-Gootkit.pcap"
a = rdpcap(data)
os.system("tshark -T fields -e _ws.col.Info -e http -e frame.time -e "
          "data.data -w Emotet-infection-with-Gootkit.pcap > Emotet-infection-with-Gootkit.txt -c 1000")
os.system("tshark -r Emotet-infection-with-Gootkit.pcap -Y http -w Emotet-infection-with-Gootkit.pcap")
sessions = a.sessions()
i = 1
for session in sessions:
    http_payload = ""
tshark -F k12text -r a.pcap -w a.txt
"K12 text format" is a text packet capture format; it's what some Tektronix equipment can write out - in that sense, it's similar to writing out the raw hex data, plus some metadata. However, from a user-interface sense, it's more like "Save As..." in Wireshark, because it's a capture file format.
I guess this should work for you
The two tshark commands you're running are:
tshark -T fields -e _ws.col.Info -e http -e frame.time -e data.data -w Emotet-infection-with-Gootkit.pcap > Emotet-infection-with-Gootkit.txt -c 1000
That command will do a live capture from the default interface, write 1000 captured packets to a capture file named Emotet-infection-with-Gootkit.pcap (probably pcapng, not pcap, as pcapng has been Wireshark's default capture format for many years now), and write the Info column, the word "http" for HTTP packets and nothing for non-HTTP packets, the time stamp, and any otherwise not dissected data out to Emotet-infection-with-Gootkit.txt as text.
tshark -r Emotet-infection-with-Gootkit.pcap -Y http -w Emotet-infection-with-Gootkit.pcap
That will read the Emotet-infection-with-Gootkit.pcap capture file and write out the HTTP packets in it to the same file. This is not recommended, because if you're reading from a file and writing to the same file in a given TShark command, you will write to the file as you're reading from it. This may or may not work.
If you want to extract the HTTP packets to a file, I'd suggest writing to a different file, e.g.:
tshark -r Emotet-infection-with-Gootkit.pcap -Y http -w Emotet-infection-with-Gootkit-http-packets.pcap
So your first command does a live capture, writes the packets to a pcapng file, and writes the fields in question to a text file, and your second command overwrites the capture file you wrote in the first command.
The first command should produce text - are you saying that the Emotet-infection-with-Gootkit.txt file is empty?
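If the goal is just to turn the existing capture into text, a sketch combining the suggestions above (read the file with -r instead of doing a live capture, and write the filtered packets to a different file) would be:
tshark -r Emotet-infection-with-Gootkit.pcap -Y http -w Emotet-infection-with-Gootkit-http-packets.pcap
tshark -r Emotet-infection-with-Gootkit.pcap -Y http -T fields -e _ws.col.Info -e frame.time -e data.data > Emotet-infection-with-Gootkit.txt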

How to download single page in separate folders with wget

I want to download a single page of a website.
I can use Chrome's "Save as", but then all assets are saved in one directory (website_files).
I used WebHTTrack, but it does not work over https.
How can I save files in folders matching the page structure with wget? If you know other tools, please tell me.
E.g. example.com/js/js.js should be stored in the js folder.
Thanks to @DaFois, I solved my problem. Here is the command I used:
wget -E -H -k -K -N -p -P somename http://example.com/something
Reference:
https://gist.github.com/dannguyen/03a10e850656577cfb57
http://www.veen.com/jeff/archives/000573.html
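For readability, here is the same command with the short flags spelled out as long options (a sketch; the behaviour should be identical):
wget --adjust-extension --span-hosts --convert-links --backup-converted --timestamping --page-requisites --directory-prefix=somename http://example.com/something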

Ffmpeg gets aborted in an Electron sandboxed application

I have an Electron app, published on the Mac AppStore, and sandboxed.
I'm trying to add a new feature that will encode/decode videos on the fly so I can stream more video formats in an Electron context.
I'm using fluent-ffmpeg and a static exec of ffmpeg.
Everything works awesomely. I uploaded the sandboxed app to Apple and got rejected, because ffmpeg by default uses a secure transport protocol that relies on a non-public API. This is what they sent me with the rejection:
Your app uses or references the following non-public API(s):
'/System/Library/Frameworks/Security.framework/Versions/A/Security'
: SecIdentityCreate
Alright, after much investigation, it appears that I have to compile ffmpeg myself with a --disable-securetransport flag. Easy enough: I do it using the same config as the static build I downloaded, simply adding the new flag.
I managed to install every dependency needed except libxavs; no big deal, I guess, so I simply removed its flag from the configure command:
./configure \
--cc=/usr/bin/clang \
--prefix=/opt/ffmpeg \
--extra-version=tessus \
--enable-avisynth \
--enable-fontconfig \
--enable-gpl \
--enable-libass \
--enable-libbluray \
--enable-libfreetype \
--enable-libgsm \
--enable-libmodplug \
--enable-libmp3lame \
--enable-libopencore-amrnb \
--enable-libopencore-amrwb \
--enable-libopus \
--enable-libsnappy \
--enable-libsoxr \
--enable-libspeex \
--enable-libtheora \
--enable-libvidstab \
--enable-libvo-amrwbenc \
--enable-libvorbis \
--enable-libvpx \
--enable-libwavpack \
--enable-libx264 \
--enable-libx265 \
--enable-libxvid \
--enable-libzmq \
--enable-libzvbi \
--enable-version3 \
--pkg-config-flags=--static \
--disable-securetransport \
--disable-ffplay
With the new ffmpeg executable, everything still works as expected. But once I package, sign, and sandbox the app, ffmpeg stops working as soon as I try to launch it, throwing this error:
An error occurred ffmpeg was killed with signal SIGABRT Error: ffmpeg was killed with signal SIGABRT
at ChildProcess.eval (webpack:///../node_modules/fluent-ffmpeg/lib/processor.js?:180:22)
at emitTwo (events.js:125:13)
at ChildProcess.emit (events.js:213:7)
at Process.ChildProcess._handle.onexit (internal/child_process.js:200:12)
I've tried removing the --disable-securetransport flag, to see if it could have messed with something: same result.
I've tried compiling on a Linux machine, just to see if it could help: same thing.
As soon as I use my custom compiled executable it doesn't work in the sandbox, but when using the static one everything is OK (after I xattr it, because it's quarantined and blocked in the sandbox).
The only thing I've noticed that seems odd is that my custom compilation is only about 20 MB, while the static build I downloaded is 43 MB.
I'm really stuck with this.
So I finally was able to compile my static ffmpeg executable.
I've found my solution thanks to this answer.
Apparently, OSX has dynamic libraries located in /usr/local/bin which take precedence over everything else. So even if you try to compile your ffmpeg to be static, it won't be with those libraries in the way.
Once I removed all those /usr/local/bin/*.dylib files, my build became fully static and worked perfectly in the sandbox.
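One way to check whether the resulting binary is actually static (a sketch, not from the original answer; otool ships with the Xcode command line tools) is to list its dynamic library dependencies - a fully static ffmpeg build should only reference system libraries such as libSystem, nothing from /usr/local:
otool -L ./ffmpeg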

Track generated file sizes on github using Travis-CI

I'd like to track the size of my minified JavaScript bundle as it's affected by pull requests on GitHub.
I'd like to see the size changes for a generated file in a GitHub status on each commit that Travis-CI builds. This would be similar to how coveralls.io and other tools track code coverage as it changes.
How can I do this? Are there existing tools? Is it simple to write my own?
GitHub provides a simple API for posting statuses on commits.
By putting a GitHub OAuth token in a Travis-CI environment variable, you can run a curl command to post the status:
filesize=$(wc -c < path-to-script.min.js | sed 's/ //g')
curl -H "Authorization: token $GITHUB_TOKEN" \
--data '{"context": "file size", "description": "'$filesize' bytes", "state": "success"}' \
https://api.github.com/repos/$TRAVIS_REPO_SLUG/statuses/$TRAVIS_COMMIT
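To confirm the status was recorded, you can read the commit's statuses back with the same token (a sketch using the corresponding GitHub read endpoint and the same Travis variables):
curl -H "Authorization: token $GITHUB_TOKEN" \
https://api.github.com/repos/$TRAVIS_REPO_SLUG/commits/$TRAVIS_COMMIT/statuses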
Calculating the change in file size resulting from a pull request is trickier. I wound up creating a Python script to do this, which you can find in the travis-weigh-in repo. With it, you'd just run this in your Travis build:
python weigh_in.py path-to-script.min.js
And it will produce commit statuses like the one in the question's screenshot.
