I used the code below to convert a pcap file to a text file with the given columns. The code doesn't give any error and it produces an output file, but the file is empty, with no data in it. Can you please help me find the error?
import os
from scapy.all import *

data = "Emotet-infection-with-Gootkit.pcap"
a = rdpcap(data)
os.system("tshark -T fields -e _ws.col.Info -e http -e frame.time -e "
          "data.data -w Emotet-infection-with-Gootkit.pcap > Emotet-infection-with-Gootkit.txt -c 1000")
os.system("tshark -r Emotet-infection-with-Gootkit.pcap -Y http -w Emotet-infection-with-Gootkit.pcap")
sessions = a.sessions()
i = 1
for session in sessions:
    http_payload = ""
tshark -F k12text -r a.pcap -w a.txt
"K12 text format" is a text packet capture format; it's what some Tektronix equipment can write out - in that sense, it's similar to writing out the raw hex data, plus some metadata. However, from a user-interface sense, it's more like "Save As..." in Wireshark, because it's a capture file format.
I guess this should work for you
The two tshark commands you're running are:
tshark -T fields -e _ws.col.Info -e http -e frame.time -e data.data -w Emotet-infection-with-Gootkit.pcap > Emotet-infection-with-Gootkit.txt -c 1000
That command will do a live capture from the default interface, write 1000 captured packets to a capture file named Emotet-infection-with-Gootkit.pcap (probably pcapng, not pcap, as pcapng has been Wireshark's default capture format for many years now), and write the Info column, the word "http" for HTTP packets (and nothing for non-HTTP packets), the time stamp, and any otherwise undissected data out to Emotet-infection-with-Gootkit.txt as text.
tshark -r Emotet-infection-with-Gootkit.pcap -Y http -w Emotet-infection-with-Gootkit.pcap
That will read the Emotet-infection-with-Gootkit.pcap capture file and write out the HTTP packets in it to the same file. This is not recommended, because if you're reading from a file and writing to the same file in a given TShark command, you will write to the file as you're reading from it. This may or may not work.
If you want to extract the HTTP packets to a file, I'd suggest writing to a different file, e.g.:
tshark -r Emotet-infection-with-Gootkit.pcap -Y http -w Emotet-infection-with-Gootkit-http-packets.pcap
So your first command does a live capture, writes the packets to a pcapng file, and writes the fields in question to a text file, and your second command overwrites the capture file you wrote in the first command.
The first command should produce text; are you saying that the Emotet-infection-with-Gootkit.txt file is empty?
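If the intent was to extract those fields from the packets already in the capture file, rather than from a live capture, reading the file with -r and dropping -w should do it. This is a sketch, not a tested command; adjust the field list as needed:
tshark -r Emotet-infection-with-Gootkit.pcap -Y http -T fields -e frame.time -e _ws.col.Info -e data.data > Emotet-infection-with-Gootkit.txt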
I would like to send the console output of my web application to a centralized logging system. I am very flexible on the recipient side (syslog|whatever server).
There are plenty of packages to send messages to servers (and a quick solution would be to fetch() to an ELK stack, for instance).
This is not what I am looking for: I would like the console errors/warnings to be forwarded (and in addition the application-generated ones, but that part is easy).
Is there a consensual solution for this problem?
More of a POC than a solution perhaps, but the Remote Debugging Protocol can be used. With Firefox:
1- Start a debugging server; the console of this instance will be captured. Pages to debug should be opened here. The dev tools must be open.
kdesu -u lmc -c "firefox --start-debugger-server 9123"
2- Start the debug Firefox client by opening about:debugging and connecting to the debug server. This instance only works as a receiver for the packets.
3- Use tshark to capture packets. Output can be sent to the logging facility.
tshark -i lo -f 'tcp port 9123' -Y 'data.data contains "pageError"' -T ek -e tcp.payload | jq -r '.layers.tcp_payload[0]' - | head -n2 | xxd -r -p
Capturing on 'Loopback: lo'
Output
These JSON objects can then be sent to the logging facility.
3 682:{"type":"pageError","pageError":{"errorMessage":"This page uses the non standard property “zoom”. Consider using calc() in the relevant property values, or using “transform” along with “transform-origin: 0 0”.","errorMessageName":"","sourceName":"https://www.acordesdcanciones.com/2020/12/la-abandone-y-no-sabia-chino-hidalgo.html","sourceId":null,"lineText":"","lineNumber":0,"columnNumber":0,"category":"Layout","innerWindowID":10737418988,"timeStamp":1629339439629,"warning":true,"error":false,"info":false,"private":false,"stacktrace":null,"notes":null,"chromeContext":false,"cssSelectors":"","isPromiseRejection":false},"from":"server1.conn1.child83/consoleActor2"}
Another tshark one-liner
tshark -i lo -f 'tcp port 9123' -Y 'data.data contains "pageError"' -T fields -e data.data | tail -n +6 | head -n3 | xxd -r -p >> data.json
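To actually forward this to a logging facility, the decoded lines can be handed to logger, which submits them to the local syslog daemon (which in turn can relay to a central server). A sketch along the lines of the one-liners above; the firefox-console tag is an arbitrary choice, and tr -d ':' guards against tshark printing the bytes colon-separated:
tshark -l -i lo -f 'tcp port 9123' -Y 'data.data contains "pageError"' -T fields -e data.data | while read -r hex; do printf '%s' "$hex" | tr -d ':' | xxd -r -p | logger -t firefox-console; done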
I have these bash_profile commands and am trying to run the backend and frontend servers with one single alias command, i.e. ins:
alias is='ivui && npm run start:backend'
alias ib='ivbe && npm run start:dev'
alias ins='ib && is'
is refers to one project folder and starts its server, and ib refers to another folder and server. I am trying to chain both, but only the first one triggers; the part after && is not executed.
Combining both servers via npm works from a single project folder, but I am wondering how to get this done using bash_profile, so that by executing just ins, it starts the backend server first and then the frontend.
The reason your double alias command (ib && is) does not work is because, as per man bash:
Aliases allow a string to be substituted for a word when it is used as
the first word of a simple command.
It stands to reason that if you run ib && is, where both ib and is are aliases, it will only run ib.
With that out of the way, I have a different and, I believe, better solution to your problem. You can use screen to run those 2 commands in the background and also, as a bonus, have the ability to watch their terminal output any time you want.
The idea is this: 1. Start a screen session; 2. Open two windows inside that session; 3. Run npm run start:backend in the 1st window and npm run start:dev in the 2nd window.
Here is what you need:
screen -S Servers -t backend_window -A -d -m
screen -S Servers -X screen -t dev_window
This will start a screen session with backend_window and dev_window inside it. Now you just need to run your 2 commands inside those windows:
screen -S Servers -p backend_window -X stuff $'npm run start:backend\n'
screen -S Servers -p dev_window -X stuff $'npm run start:dev\n'
Now both your npm commands are running in the background at the same time. You can just put these 4 lines into your .bashrc file and it will launch on log in.
But the best part about this approach is you can visually see what's going on with those npm commands by attaching the screen and looking inside those windows. Just run:
screen -rx Servers
This will show you your first window by default. Let's split the screen and show both windows side by side:
Ctrl-A + | <- will split the screen into 2 sections
Ctrl-A + Tab <- will shift the cursor to the new section
Ctrl-A + " <- will show you your 2 windows, just pick the dev one
All this will give you a side-by-side view of both servers' output.
Keep in mind that both your npm commands will continue to run even after you log out. To kill them, either attach the screen as described above and Ctrl-C both processes, or just run killall screen and they will die.
You could create a function and add your alias commands inside it, for example:
stopdev () (
    cd "$HOME/website" && {
        make website_stop
        ret=$?
        make backend_stop && return $ret
    }
)
This example also returns an exit code, and the function body runs in a subshell (note the outer parentheses), so your working directory is not left at $HOME/website after the function is called.
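Adapted to the start commands from the question, it could look something like the sketch below. The folder paths are hypothetical placeholders for whatever ivbe and ivui point at, and the backend is backgrounded with & so the second command is reached at all (otherwise the first blocking npm run would prevent it from ever starting):
ins () (
    # hypothetical paths; substitute the folders ivbe/ivui change into
    (cd "$HOME/backend-project" && npm run start:dev) &
    cd "$HOME/frontend-project" && npm run start:backend
)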
More information is available on the following webpage:
https://unix.stackexchange.com/questions/266063/why-cant-i-call-two-aliases-with
I am following an online tutorial about creating a javascript-based server/client environment, and to test a POST method, the author gave a block of cURL code to run. When I try to run it, however, I get errors.
I have done some research, and I'm fairly sure that the provided code is for a Linux environment, but I'm operating on Windows 10. I tried changing the \ characters to ^ but I'm still getting errors. I have used both the cmd prompt and PowerShell.
curl -X POST \
http://localhost:3000/signup \
-H 'Content-Type: application/json' \
-d '{
"email": "test5#test.com",
"password": "1234",
"name": "test5"
}'
I expected the code to post data to my database, but instead I get the following output:
C:\Users\ricks>curl -X POST \
curl: (6) Could not resolve host: \
C:\Users\ricks> http://localhost:3000/signup \
'http:' is not recognized as an internal or external command,
operable program or batch file.
C:\Users\ricks> -H 'Content-Type: application/json' \
'-H' is not recognized as an internal or external command,
operable program or batch file.
C:\Users\ricks> -d '{
'-d' is not recognized as an internal or external command,
operable program or batch file.
C:\Users\ricks>"email": "test5#test.com",
'"email":' is not recognized as an internal or external command,
operable program or batch file.
C:\Users\ricks>"password": "1234",
'"password":' is not recognized as an internal or external command,
operable program or batch file.
C:\Users\ricks>"name": "test5"
'"name":' is not recognized as an internal or external command,
operable program or batch file.
While all of the answers provided would undoubtedly lead to me being able to run the cURL code I posted above, I found a workaround using only the Windows cmd prompt.
First, I put the command on a single line rather than multiple lines. Second, I discovered that the errors came primarily from unescaped " characters. In the end, the following code worked correctly and POSTed data to my database!
curl -X POST http://localhost:3000/signup -H "Content-Type:application/json" -d "{\"email\" : \"test4#test.com\", \"password\" : \"1234\", \"name\": \"test5\" }"
While this is likely not the most sustainable approach, it was a good learning moment for me and it might help those looking to utilize a one-time cURL execution without downloading anything extra to their system.
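For reference, the multi-line form can also be made to work in cmd: the continuation character is ^, but the quoting has to be converted as well, since cmd does not treat single quotes as quoting. A sketch of the same request (untested):
curl -X POST http://localhost:3000/signup ^
  -H "Content-Type: application/json" ^
  -d "{\"email\": \"test5#test.com\", \"password\": \"1234\", \"name\": \"test5\"}"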
First download & install Cygwin, and make sure to install the gcc-core, g++, make, autoconf, automake, git, libtool, and m4 packages during package selection (there may be more required; if you get a compile error, I can probably deduce the missing package from your compilation error, let me know).
And I guess you want your curl installation to support TLS/HTTPS? (Almost all websites require HTTPS these days.) In which case, you have to start by compiling OpenSSL.
(Replace 1_1_1c with the latest stable OpenSSL release; it is 1.1.1c as of writing.)
git clone -b 'OpenSSL_1_1_1c' --single-branch --depth 1 https://github.com/openssl/openssl
cd openssl
./config no-shared
make -j $(nproc)
(The last step may take a while.) OpenSSL's build script does not create a lib folder, but curl's build script expects the lib files to be in a lib folder inside the openssl folder, so after that, run:
mkdir lib
cp *.a lib;
After OpenSSL is compiled, cd .. out of there; it's time to make curl.
(Replace 7_65_3 with whatever the latest stable curl release is; as of writing, it is 7.65.3.)
git clone -b 'curl-7_65_3' --single-branch --depth 1 https://github.com/curl/curl.git
cd curl
./buildconf
LDFLAGS="-static" ./configure --with-ssl=$(realpath ../openssl) --disable-shared --enable-static
make -j $(nproc)
(If you wonder why I used realpath: there appears to be a bug in curl's build script that makes it fail if you supply a relative path, so an absolute path seems to be required. If you wonder why I made a static build, aka --disable-shared --enable-static: you may have a different OpenSSL library in your $PATH, so to avoid a variation of DLL Hell, a static build is safer.)
And finally, after make has finished, you have your own curl build at the relative path ./src/curl, which you can run with:
./src/curl -X POST \
http://localhost:3000/signup \
-H 'Content-Type: application/json' \
-d '{
"email": "test5#test.com",
"password": "1234",
"name": "test5"
}'
in a Cygwin terminal (which has the same syntax as Linux terminals).
Alternatively, you can just install the curl package from Cygwin and use Cygwin's pre-compiled curl, but since you asked specifically how to COMPILE curl, there's your answer. I recommend you avoid the hassle and just install Cygwin's pre-compiled curl.
I tried changing the \ characters to ^ but I'm still getting errors
This is not even the start of the problems in migrating the code. There are compiled versions already available on the application's home site for Windows, 32- and 64-bit, with and without Cygwin.
When I try to run my JMeter WebDriver sampler script via command prompt, the error below shows up. Is there any solution for this?
F:\apache-jmeter-3.2\bin\TestScriptRecorder.jmx is not a valid Win32 application.
It seems you are trying to execute the .jmx file directly; it won't work this way. You need to launch the jmeter.bat file and pass the .jmx file via the -t command-line argument, like:
F:\apache-jmeter-3.2\bin\jmeter.bat -n -t F:\apache-jmeter-3.2\bin\TestScriptRecorder.jmx -l result.jtl
or
java -jar F:\apache-jmeter-3.2\bin\ApacheJMeter.jar -n -t F:\apache-jmeter-3.2\bin\TestScriptRecorder.jmx -l result.jtl
References:
Non-GUI Mode (Command Line mode)
Full list of command-line options
How Do I Run JMeter in Non-GUI Mode?
If you want to run your .jmx through the command line, follow the steps below.
Open a command prompt.
Go to the path where your jmeter.bat or jmeter.sh file is placed (e.g. C:\Users\ABC\apache-jmeter-3.3\bin).
Run the command as below:
For Windows: jmeter -n -t C:\your_testScript_path\yourscript.jmx
For Ubuntu/Linux: ./jmeter.sh -n -t /your_testScript_path/yourscript.jmx
You can also pass values to the .jmx file from the command line using parameters like -JUsers=10, where the property should be referenced in the .jmx as ${__P(Users,1)}.
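For example, assuming the hypothetical script path used above, the following runs the test plan with 10 users, falling back to 1 when -JUsers is not given:
jmeter -n -t C:\your_testScript_path\yourscript.jmx -JUsers=10 -l result.jtl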
I'm trying to run a JavaScript app on localhost:8000 using Docker. Part of what I would like to do is swap out some config files based on the docker run command, so I'd like to pass an environment variable into the container that the bash script can use as a parameter.
What my dockerfile is looking like is this:
FROM nginx
COPY . /usr/share/nginx/html
CMD ["bash","/usr/share/nginx/html/runfile.sh"]
And the bash script looks like this:
#!/bin/bash
if [ "$SECURITY_VERSION" = "OPENAM" ]; then
    sed -i -e 's/localhost/openam/g' authConfig.js
fi
docker run -p 8000:80 missioncontrol:latest -e SECURITY_VERSION="TEST"
Docker gives me an exception saying -e exec command not found.
However if I change the dockerfile to use ENTRYPOINT instead of CMD, the -e flag works but the webserver does not start up.
Is there something I'm missing here? Is the ENTRYPOINT being overridden or something?
EDIT:
So I've updated my dockerfile to use ENTRYPOINT ["bash","/usr/share/nginx/html/runfile.sh", ";", " nginx -g daemon off;"]
But the docker container still shuts down. Is there something I'm missing?
The NGINX 1.19 image has a folder /docker-entrypoint.d in the root where you can place startup scripts, which are executed by the docker-entrypoint.sh script. You can also see the execution in the log:
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching [..........]
/docker-entrypoint.sh: Configuration complete; ready for start up
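So instead of replacing CMD or ENTRYPOINT, the script from the question can be dropped into that folder at build time. A sketch: the 40- prefix is a hypothetical choice (the scripts are run in sorted order), and the script must be executable:
FROM nginx:1.19
COPY . /usr/share/nginx/html
COPY ./runfile.sh /docker-entrypoint.d/40-runfile.sh
RUN chmod +x /docker-entrypoint.d/40-runfile.sh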
For my future self and everybody else, this is how you can set up variable substitution at startup (for nginx; it may also work for other images).
I've also written a more in-depth blog post about it: https://danielhabenicht.github.io/docker/angular/2019/02/06/angular-nginx-runtime-variables.html
Dockerfile:
FROM nginx
ENV TEST="Hello variable"
WORKDIR /etc/nginx
COPY ./substituteEnv.sh ./substituteEnv.sh
# Execute the substitution script and pass the path of the file to replace
ENTRYPOINT ["./substituteEnv.sh", "/usr/share/nginx/html/index.html"]
CMD ["nginx", "-g", "daemon off;"]
substituteEnv.sh (same as #Daniel West's answer):
#!/bin/bash
if [[ -z $1 ]]; then
    echo 'ERROR: No target file given.'
    exit 1
fi

# Substitute all environment variables listed below in the file given as argument.
# Note: going through a temp file, because 'envsubst < $1 > $1' would truncate
# the input before it is read.
tmp=$(mktemp)
envsubst '\$TEST \$UPSTREAM_CONTAINER \$UPSTREAM_PORT' < "$1" > "$tmp" && mv "$tmp" "$1"

# Execute all other parameters (i.e. the container's CMD)
exec "${@:2}"
Now you can run docker run -e TEST="set at command line" -it <image_name>
The catch was the WORKDIR: without it, the nginx command wouldn't be executed. If you want to apply this to other containers, be sure to set the WORKDIR accordingly.
If you want to do the substitution recursively in multiple files, this is the bash script you are looking for:
#!/bin/bash
# Substitutes all given environment variables
variables=( TEST )
if [[ -z $1 ]]; then
    echo 'ERROR: No target file or directory given.'
    exit 1
fi
for i in "${variables[@]}"
do
    if [[ -z ${!i} ]]; then
        echo 'ERROR: Variable "'$i'" not defined.'
        exit 1
    fi
    echo $i ${!i} $1
    # Variables to be replaced should have the format: ${TEST}
    grep -rl $i $1 | xargs sed -i "s/\${$i}/${!i}/Ig"
done
exec "${@:2}"
I know this is late but I found this thread while searching for a solution so thought I'd share.
I had the same issue. Your ENTRYPOINT script should also include exec "$@".
#!/bin/sh
set -e
envsubst '\$CORS_HOST \$UPSTREAM_CONTAINER \$UPSTREAM_PORT' < /srv/api/default.conf > /etc/nginx/conf.d/default.conf
exec "$@"
That means the startup CMD from the nginx:alpine container will still run. The script above injects the specified environment variables into a config file. By doing this at runtime, you can override the environment variables.
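For this to work, the script has to be the image's ENTRYPOINT and the original startup command the CMD, so that exec "$@" receives it. A sketch with a hypothetical script path:
FROM nginx:alpine
COPY ./docker-entrypoint-custom.sh /docker-entrypoint-custom.sh
ENTRYPOINT ["/docker-entrypoint-custom.sh"]
CMD ["nginx", "-g", "daemon off;"]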
Update the CMD line as below in your Dockerfile. Please note that if runfile.sh does not succeed (i.e. exit with status 0), then the nginx command after && will not be executed.
FROM nginx
COPY . /usr/share/nginx/html
CMD /usr/share/nginx/html/runfile.sh && nginx -g 'daemon off;'
The nginx Docker image uses a CMD command to start the server in the base image you use. When you use the CMD command in your Dockerfile, you overwrite the one in their image. As mentioned in the Dockerfile documentation:
There can only be one CMD instruction in a Dockerfile. If you list more than one CMD then only the last CMD will take effect.
The NGINX image has docker-entrypoint.d included, and on container start it will look for any scripts located in there. You can add your custom scripts during docker build. I also found that if you are using the alpine image, bash is not installed, so you can add it yourself by running:
RUN apk update
RUN apk upgrade
RUN apk add bash
Sample Dockerfile:
FROM nginx:alpine
EXPOSE 443
EXPOSE 80
RUN apk update
RUN apk upgrade
RUN apk add bash
COPY ["my-script.sh", "/docker-entrypoint.d/my-script.sh"]
RUN chown nginx:nginx /docker-entrypoint.d/my-script.sh
USER nginx
In order to limit the execution scope of your custom script, it's highly recommended to run your container as a non-privileged user.
The nginx container already defines an ENTRYPOINT. If you also define a CMD, Docker combines them as 'ENTRYPOINT CMD', in such a way that the CMD becomes the argument of the ENTRYPOINT. That is why you need to redefine the ENTRYPOINT to get it working.
Usually the ENTRYPOINT is defined in such a way that if you also pass a CMD, it will be executed by the ENTRYPOINT script. However, this might not be the case with every container.
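Putting this together for the original question: one option is to keep runfile.sh as the ENTRYPOINT and end it with exec nginx. This is a sketch assuming the file layout from the question. Note also that in the docker run command from the question, -e has to come before the image name; anything after the image name is interpreted as the command to run inside the container, which is exactly why Docker complained about -e:
#!/bin/bash
# runfile.sh (sketch): apply the config tweak, then hand off to nginx
if [ "$SECURITY_VERSION" = "OPENAM" ]; then
    sed -i -e 's/localhost/openam/g' /usr/share/nginx/html/authConfig.js
fi
exec nginx -g 'daemon off;'
And run it with:
docker run -p 8000:80 -e SECURITY_VERSION="TEST" missioncontrol:latest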