I'd like to track how the size of my minified JavaScript bundle is affected by pull requests on GitHub.
Specifically, I'd like to see the size change for a generated file as a GitHub status on each commit that Travis-CI builds, similar to how coveralls.io and other tools track code coverage as it changes.
How can I do this? Are there existing tools? Is it simple to write my own?
GitHub provides a simple API for posting statuses on commits.
By putting a GitHub OAuth token in a Travis-CI environment variable, you can run a curl command to post the status:
# strip the whitespace padding that some wc implementations add
filesize=$(wc -c < path-to-script.min.js | sed 's/ //g')
# post a "file size" status on the commit Travis is building
curl -H "Authorization: token $GITHUB_TOKEN" \
  --data '{"context": "file size", "description": "'$filesize' bytes", "state": "success"}' \
  https://api.github.com/repos/$TRAVIS_REPO_SLUG/statuses/$TRAVIS_COMMIT
Calculating the change in file size resulting from a pull request is trickier. I wound up creating a Python script to do this, which you can find in the travis-weigh-in repo. With it, you'd just run this in your Travis build:
python weigh_in.py path-to-script.min.js
And it will produce commit statuses like the one in the question's screenshot.
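If you'd rather sketch your own delta calculation, here is a minimal, hedged approach using plain git, assuming a full clone with origin/master fetched (an approximation for illustration, not how weigh_in.py actually works):
# hedged sketch: compare the file's size against the merge base with master
new_size=$(wc -c < path-to-script.min.js)
base=$(git merge-base HEAD origin/master)
old_size=$(git show "$base:path-to-script.min.js" | wc -c)
echo "size change: $((new_size - old_size)) bytes"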
Related
I would like to use C++ and libcurl to retrieve files from my Meijer account page. The page I would like to reach is
https://www.meijer.com/mperks/ShoppingTools#receipts.
The login form lives here
https://accounts.meijer.com/manage/Account#/form/user
but it exists on multiple pages. In addition, any curl request I make to the above link, such as the following,
curl -A "Mozilla/5.0" -L -e ";auto" -c cookie.txt https://accounts.meijer.com/manage/Account#/form/user
returns a page warning that the browser does not support JavaScript.
Is what I want feasible given C++ and curl?
I am following an online tutorial about creating a javascript-based server/client environment, and to test a POST method, the author gave a block of cURL code to run. When I try to run it, however, I get errors.
I have done some research, and I'm fairly sure that the provided code is for a Linux environment, but I'm operating on Windows 10. I tried changing the \ characters to ^ but I'm still getting errors. I have used both the cmd prompt and PowerShell.
curl -X POST \
  http://localhost:3000/signup \
  -H 'Content-Type: application/json' \
  -d '{
    "email": "test5#test.com",
    "password": "1234",
    "name": "test5"
  }'
I expected the code to post data to my database, but instead I get the following output:
C:\Users\ricks>curl -X POST \
curl: (6) Could not resolve host: \
C:\Users\ricks> http://localhost:3000/signup \
'http:' is not recognized as an internal or external command,
operable program or batch file.
C:\Users\ricks> -H 'Content-Type: application/json' \
'-H' is not recognized as an internal or external command,
operable program or batch file.
C:\Users\ricks> -d '{
'-d' is not recognized as an internal or external command,
operable program or batch file.
C:\Users\ricks>"email": "test5#test.com",
'"email":' is not recognized as an internal or external command,
operable program or batch file.
C:\Users\ricks>"password": "1234",
'"password":' is not recognized as an internal or external command,
operable program or batch file.
C:\Users\ricks>"name": "test5"
'"name":' is not recognized as an internal or external command,
operable program or batch file.
While all of the answers provided would undoubtedly lead to me being able to run the cURL code I posted above, I found a workaround using only the Windows cmd prompt.
First, I combined the command into a single line rather than multiple. Second, I discovered that the errors came primarily from un-escaped " characters. In the end, the following code worked correctly and POSTed data to my database!
curl -X POST http://localhost:3000/signup -H "Content-Type:application/json" -d "{\"email\" : \"test4#test.com\", \"password\" : \"1234\", \"name\": \"test5\" }"
While this is likely not the most sustainable approach, it was a good learning moment for me and it might help those looking to utilize a one-time cURL execution without downloading anything extra to their system.
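For what it's worth, the multi-line layout can also work in cmd once the quoting is fixed, by using caret continuations instead of backslashes; an untested sketch:
REM same one-line command as above, split with cmd's ^ continuation character
curl -X POST http://localhost:3000/signup ^
  -H "Content-Type: application/json" ^
  -d "{\"email\": \"test5#test.com\", \"password\": \"1234\", \"name\": \"test5\"}"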
First, download and install Cygwin, and make sure to install the gcc-core, gcc-g++, make, autoconf, automake, git, libtool, and m4 packages during package selection. (There may be more required; if you get a compile error, I can probably deduce the missing package from your compilation error, so let me know.)
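(As a convenience, Cygwin's setup executable can preselect packages from the command line; a hedged example, assuming setup-x86_64.exe is in your current folder:)
REM -q runs the installer unattended; -P takes a comma-separated package list
setup-x86_64.exe -q -P gcc-core,gcc-g++,make,autoconf,automake,git,libtool,m4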
I assume you want your curl installation to support TLS/HTTPS (almost all websites require HTTPS these days), in which case you have to start by compiling OpenSSL.
(Replace OpenSSL_1_1_1c with the latest stable OpenSSL release; it is 1.1.1c as of writing.)
git clone -b 'OpenSSL_1_1_1c' --single-branch --depth 1 https://github.com/openssl/openssl
cd openssl
./config no-shared
make -j $(nproc)
(The last step may take a while.) OpenSSL's build script does not create a lib folder, but curl's build script expects the lib files to be in a lib folder inside the openssl folder, so after that, run
mkdir lib
cp *.a lib;
After OpenSSL is compiled, cd .. out of there; it's time to build curl.
(Replace curl-7_65_3 with whatever the latest stable curl release is; as of writing, it is 7.65.3.)
git clone -b 'curl-7_65_3' --single-branch --depth 1 https://github.com/curl/curl.git
cd curl
./buildconf
LDFLAGS="-static" ./configure --with-ssl=$(realpath ../openssl) --disable-shared --enable-static
make -j $(nproc)
(If you wonder why I used realpath: there appears to be a bug in curl's build script that makes it fail if you supply a relative path, so an absolute path seems to be required. If you wonder why I made a static build, aka --disable-shared --enable-static: you may have a different OpenSSL library in your $PATH, so to avoid a variation of DLL hell, a static build is safer.)
Finally, after make has finished, you have your own curl build at the relative path ./src/curl, which you can run with:
./src/curl -X POST \
  http://localhost:3000/signup \
  -H 'Content-Type: application/json' \
  -d '{
    "email": "test5#test.com",
    "password": "1234",
    "name": "test5"
  }'
in a Cygwin terminal (which has the same syntax as Linux terminals).
Alternatively, you can just install the curl package from Cygwin and use Cygwin's pre-compiled curl, but since you asked specifically how to COMPILE curl, there's your answer. I recommend you avoid the hassle and just install Cygwin's pre-compiled curl.
I tried changing the \ characters to ^ but I'm still getting errors
This is not even the start of the problems in migrating the code. Compiled versions for 32- and 64-bit Windows, with and without Cygwin, are already available on the application's home site.
I have an npm-based project and I want to introduce a Swagger-based REST API client into it. My idea is to have an API description YAML file and generate the client at build time.
Are there any well-known approaches to doing this? I found only swagger-js-codegen, but I don't clearly understand how to integrate it into the build process.
Given that you have your REST API documented in a Swagger/OpenAPI spec, you can simply use curl (or other HTTP tools) to send an HTTP request to generate API clients as part of your build process. An example curl request to generate a Ruby client for http://petstore.swagger.io/v2/swagger.json is as follows:
curl -X POST -H "content-type:application/json" -d '{"swaggerUrl":"http://petstore.swagger.io/v2/swagger.json"}' https://generator.swagger.io/api/gen/clients/ruby
Please refer to https://github.com/swagger-api/swagger-codegen#online-generators for more info.
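To hook this into an npm build, one option is a small shell script invoked from an npm script. A hedged sketch, assuming the generator's JSON response contains a link field pointing at a zip of the generated client (verify the response format against the docs above):
# request a generated client, then download and unpack the resulting zip
# (assumption: the generator responds with JSON like {"code": "...", "link": "..."})
link=$(curl -s -X POST -H "content-type:application/json" \
  -d '{"swaggerUrl":"http://petstore.swagger.io/v2/swagger.json"}' \
  https://generator.swagger.io/api/gen/clients/ruby \
  | sed -n 's/.*"link" *: *"\([^"]*\)".*/\1/p')
curl -s "$link" -o client.zip && unzip -o client.zip -d generated-client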
UPDATE: In May 2018, about 50 top contributors and template creators of Swagger Codegen decided to fork Swagger Codegen and maintain a community-driven version called OpenAPI Generator. Please refer to the Q&A for more information.
I am trying out a trial version of Splunk Cloud. I created the HTTP Event Collector. Now I am trying to send logs to Splunk using the curl script available here: http://dev.splunk.com/view/event-collector/SP-CAAAE7F. But I guess I am doing something wrong, as I am not able to reach the server.
What Splunk host name do I have to use to send the logs?
This is my Splunk cloud instance https://xxxxx.cloud.splunk.com
I tried something like this, which I guess is wrong ("tokenid" is a placeholder for the token I got after creating the HTTP Event Collector):
curl -k https://xxxxx.cloud.splunk.com/services/collector -H 'Authorization: Splunk tokenid' -d '{"event":"Hello, World!"}'
Please help.
Thanks
I too tried the same thing using curl and got the error below:
curl: (7) Failed to connect to xxx.cloud.splunk.com port 8088: Timed out
Later I noticed that, in the doc mentioned above, there is a "notes" section which says to prefix the host name with "input-" for self-service cloud instances.
With this change in place, the curl request worked, and data appeared in the Splunk dashboard. See the curl output:
$ curl -k https://input-xxx.cloud.splunk.com:8088/services/collector -H "Authorization: Splunk 00301XX3-1234-12XX-X1XX-1234X0X1XXX0" -d '{"event":"Breakfast Order"} {"event":{"coffee":"double cream double sugar","muffin":"blueberry","juice":"none"}}'
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   139  100    27  100   112     10     43  0:00:02  0:00:02 --:--:--    74
{"text":"Success","code":0}
Also, your code will work if the request is sent to a local Splunk server installation. In that case the curl request can be sent to localhost:8088/services/collector without the input- prefix.
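For a local installation the request would look something like this (the token placeholder is yours to fill in):
curl -k https://localhost:8088/services/collector \
  -H "Authorization: Splunk <your-token>" \
  -d '{"event":"Hello, World!"}'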
If you already added the input- prefix to the host name and still get this error, it might be something else. Please see if this link helps.
I have a github.com demo page that links to https://raw.github.com/.../master/.../file.js so that I don't have to copy the .js file over to the gh-pages branch every time it changes. This works in every browser except IE, which complains:
SEC7112: Script from https://raw.github.com/cwolves/jQuery-iMask/master/dist/jquery-imask-min.js was blocked due to mime type mismatch
This complaint is coming from the fact that the file is transferred with:
X-Content-Type-Options: nosniff
Content-Type: text/plain
which I can't change.
Anyone have any ideas how to accomplish this same thing? Somehow allowing me to link to the file in the master branch without having to always push it to the gh-pages branch?
Actual page: http://cwolves.github.com/jQuery-iMask/
(Minor update -- I changed the gh-pages branch in this exact instance to include the .js file, so IE is no longer broken, but I would still like any feedback :))
You can try using https://rawgit.com/ service.
Just replace raw.github.com with rawgit.com
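For example, the file from the question:
https://raw.github.com/cwolves/jQuery-iMask/master/dist/jquery-imask-min.js
becomes
https://rawgit.com/cwolves/jQuery-iMask/master/dist/jquery-imask-min.js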
UPDATE
The RawGit service (formerly RawGithub) has been shut down.
RawGit has reached the end of its useful life
October 8, 2018
GitHub repositories that served content through RawGit within the last month will continue to be served until at least October of 2019. URLs for other repositories are no longer being served.
If you're currently using RawGit, please stop using it as soon as you can.
I can't help you with tricking IE, and I think from that angle what you are looking for is impossible (and discouraged, since that is not the purpose of Github's raw URLs).
However, you can automate committing the changes to gh-pages and pushing to make your life easier. You can do it with a post-commit hook to update the relevant files in the gh-pages branch automatically. I've cooked up such a post-commit script that watches for changes to certain files and commits them to another branch:
#!/bin/sh

WATCH_BRANCH="master"
WATCH_FILES="jquery-imask-min.js"
DEST_BRANCH="gh-pages"

# bail out if this commit wasn't made in the watched branch
THIS_BRANCH=$(git branch --no-color | sed -e '/^[^*]/d' -e 's/* \(.*\)/\1/')
if [ "$THIS_BRANCH" != "$WATCH_BRANCH" ]; then
    exit 0
fi

# only update if watched files have changed in the latest commit
CHANGED_FILES=$(git show --pretty="format:" --name-only $WATCH_BRANCH)
if echo "$CHANGED_FILES" | grep -Eq "^($WATCH_FILES)$"; then
    # checkout destination branch, then
    # checkout latest version of each watched file and add to index
    git checkout -q $DEST_BRANCH
    git pull -q
    SAVEIFS=$IFS
    IFS=$(echo -n "|")
    for file in $WATCH_FILES; do
        git checkout $WATCH_BRANCH -- $file
        git add $file > /dev/null
    done
    IFS=$SAVEIFS

    # commit with a message referencing the source commit,
    # then go back to the watched branch
    LATEST_COMMIT=$(git rev-parse $WATCH_BRANCH)
    git commit -m "Also including changes from $WATCH_BRANCH's $LATEST_COMMIT"
    git push origin $DEST_BRANCH
    git checkout -q $WATCH_BRANCH
fi
Note that this is a general script, though I have filled in the config vars at the top for your purposes. $WATCH_FILES can be set to a list of files delimited by pipes (|), such as index.html|js/jquery.js. Paths must be specified relative to the root of the repo.
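To use it, save the script as your repository's post-commit hook and make it executable; a minimal sketch, assuming you are at the repo root and saved the script as post-commit:
# git only runs hooks that live in .git/hooks and are marked executable
cp post-commit .git/hooks/post-commit
chmod +x .git/hooks/post-commit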
Let me know if you have any questions, and if the script helps you!
Take a look at raw.githack.com. The idea of this service is inspired by rawgit.com. I realized that using a whole framework (node.js + express.js) for something as simple as request proxying is overkill, so I built the same thing using only nginx.
Replace the "githubusercontent" chunk of the domain name in your GitHub/gist URL with "githack" and you're done!
Furthermore, it supports Bitbucket: simply replace the whole Bitbucket domain with bb.githack.com.
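For example, following the same pattern with the file from the question:
https://raw.githubusercontent.com/cwolves/jQuery-iMask/master/dist/jquery-imask-min.js
becomes
https://raw.githack.com/cwolves/jQuery-iMask/master/dist/jquery-imask-min.js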
GitHub's raw URLs aren't designed to be a generic web host. Push that stuff off to a proper host, like pages.github.com.
Nowadays there's jsDelivr; it's open source, free, and fast.
And it supports GitHub! https://www.jsdelivr.com/
Furthermore, it's from a trusted company.
I mention this because I'm not sure we can trust https://raw.githack.com/
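For example, following jsDelivr's documented gh URL pattern, the file from the question could be served as:
https://cdn.jsdelivr.net/gh/cwolves/jQuery-iMask@master/dist/jquery-imask-min.js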
I am also trying to achieve this. However, I could not get the solution from @shelhamer to work in my codebase as-is. Below is the updated post-commit hook script that I used to get it working:
#!/bin/sh

WATCH_BRANCH="master"
WATCH_FILES="jquery-imask-min.js"
DEST_BRANCH="gh-pages"

# bail out if this commit wasn't made in the watched branch
THIS_BRANCH=$(git branch --no-color | sed -e '/^[^*]/d' -e 's/* \(.*\)/\1/')
if [ "$THIS_BRANCH" != "$WATCH_BRANCH" ]; then
    exit 0
fi

# only update if watched files have changed in the latest commit
CHANGED_FILES=$(git show --pretty="format:" --name-only $WATCH_BRANCH)
if echo "$CHANGED_FILES" | grep -Eq "^($WATCH_FILES)$"; then
    # checkout destination branch, then
    # checkout latest version of each watched file and add to index
    git checkout -q $DEST_BRANCH
    git pull -q
    SAVEIFS=$IFS
    IFS=$(echo -n "|")
    for file in $WATCH_FILES; do
        git checkout $WATCH_BRANCH -- $file
        git add $file > /dev/null
    done
    IFS=$SAVEIFS

    # commit with a message referencing the source commit,
    # then go back to the watched branch
    LATEST_COMMIT=$(git rev-parse $WATCH_BRANCH)
    git commit -m "Also including changes from $WATCH_BRANCH's $LATEST_COMMIT"
    git push origin $DEST_BRANCH
    git checkout -q $WATCH_BRANCH
fi
I had to update the use of grep to make the regex match successfully (-P was not an option on the grep implementation included in the Git Bash shell for Windows), and add a git pull and a git push origin $DEST_BRANCH. Oh, and I had to create an empty shell of the directories and files in the destination branch in advance (perhaps just the directories would have sufficed?).
Since this is actually doing a push, I think it may be better to make this a post-receive hook instead of post-commit. Otherwise, you could be pushing code to gh-pages that never made it into master [if you choose not to push it].