How to update the local path with a commit version? - javascript

I'm trying to clone a repo to the local file system and then checkout a specific commit.
This is what I have:
Git.Clone(GIT_REPO_URL, localPath, CLONE_OPTIONS).then((repo) => {
    return repo.getCommit(version).then((commit) => {
        // use the local tree
    });
}).catch((error) => {
    // handle clone failure
});
This clones the repo just fine, but the local version I end up with is the current head of master and not the commit I checked out (version).
How do I update the local tree to match this commit?
Thanks.

getCommit does not modify the local tree; it merely retrieves the commit into memory so that you can work with it without modifying the local tree. (This would, for example, be useful if you wanted to walk through the git history and do some analysis, but didn't need to actually update the local tree to access the files themselves.)
To actually check out a specific branch or commit, you want to use the checkoutBranch and checkoutRef functions, which actually do update the local working tree. (Notice how their descriptions explicitly say so, unlike getCommit's, which doesn't say it modifies the working tree because it doesn't actually do that.)

If you want to check out an arbitrary commit, you need to use the Checkout.tree(*) function instead.
var repository;
NodeGit.Clone(GIT_REPO_URL, localPath, CLONE_OPTIONS)
    .then(function(repo) {
        repository = repo;
        return repo.getCommit(version);
    })
    .then(function(commit) {
        return NodeGit.Checkout.tree(repository, commit, {
            checkoutStrategy: NodeGit.Checkout.STRATEGY.FORCE
        });
    }).catch((error) => {
        // handle clone failure
    });
You may or may not also want to detach your HEAD.
Edit:
This will check out and set HEAD to the commit in question.
var repository;
NodeGit.Clone(GIT_REPO_URL, localPath, CLONE_OPTIONS)
    .then(function(repo) {
        repository = repo;
        return repo.getCommit(version);
    })
    .then(function(commit) {
        return repository.setHeadDetached(commit.id());
    }).catch((error) => {
        // handle clone failure
    });

I haven't managed to make this work, and since it turned out to be far too complicated for something so trivial, I've decided to just do this:
import * as process from "child_process";
const command = `git clone ${ GIT_REPO_URL } ${ repoPath } && cd ${ repoPath } && git checkout ${ version }`;
process.exec(command, error => {
...
});


how to stop interruption for all tests on first failure in cypress

I have a task to visit multiple pages and generate a Lighthouse report for each one. I implemented the report generation and created an Excel file listing all the page URLs, so the code now reads the Excel file, gets the URLs one by one, visits each page, and runs the Lighthouse test.
//calculationStoryService
export function getLighthouseAuditForCalculationStory() {
    getAllCalculationStoriesFromExcel().then(stories => {
        stories.forEach(story => {
            cy.visit(story.url);
            cy.wait(Cypress.env("storyTableViewResultColumnCheckLoadingTimeout"));
            lighthouseAudit(story.name); // generating lighthouse report
        });
    });
}
I am generating the Lighthouse reports with a package called "cypress-audit". To generate a report we need to give the base values for a single page:
function lighthouseAudit(name) {
    let baseValueData = getBaseValueData();
    let desktopConfig = getDesktopConfig();
    const lighthouseMetrics = {
        performance: baseValueData.performance,
        accessibility: baseValueData.accessibility,
        seo: baseValueData.seo,
        pwa: baseValueData.pwa
    };
    cy.lighthouse(lighthouseMetrics, desktopConfig);
}
When I call the "lighthouseAudit" method I pass "story.name" as a parameter. It's a unique id for each page, so the "lighthouseAudit" method knows how to get the "baseValueData" using that unique id. When the "lighthouseAudit" method runs, it performs a Lighthouse test and compares the values with the base values; if some value is less than the base value, it fails the test. That is how it works, and it works fine.
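The threshold comparison described above can be pictured with a small pure function. This is a simplified sketch of the idea, not cypress-audit's actual code; the metric names follow the earlier snippet:

```javascript
// Return the names of metrics whose measured score fell below its base value.
function failingMetrics(scores, baseValues) {
  return Object.keys(baseValues).filter(
    (metric) => scores[metric] < baseValues[metric]
  );
}

// A run fails when at least one metric is under its threshold.
const failures = failingMetrics(
  { performance: 70, accessibility: 95 }, // measured scores
  { performance: 80, accessibility: 90 }  // base values
);
// failures → ["performance"]: only performance is below its base value
```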
This is my test file, where I call the "getLighthouseAuditForCalculationStory" method:
import { Given, Then } from "cypress-cucumber-preprocessor/steps";
import { signIn } from "../../../support/services/commonServices";
import { getLighthouseAuditForCalculationStory } from "../../../support/services/calculationStoryService";
Given('Logged in', () => {
    signIn(Cypress.env('username'), Cypress.env('password'));
});
Then('navigate to page and generate lighthouse audit', () => {
    getLighthouseAuditForCalculationStory();
});
As I explained, "getLighthouseAuditForCalculationStory" gets all the web page URLs, iterates through the list, visits each page, and then runs the Lighthouse test. If, for example, the first page's Lighthouse test fails, the loop stops there. I want to continue the loop and visit all the pages and run the Lighthouse test even if one fails, but a single failing test interrupts the entire loop.
How do I continue the loop even though the Lighthouse test fails for one web page?
packages list
"@cypress-audit/lighthouse": "^1.3.1",
"cypress": "^10.9.0",
"cypress-audit": "^1.1.0",
"cypress-cucumber-preprocessor": "^4.3.1"
A couple of things to try:
Suppress the fail with an event handler.
export function getLighthouseAuditForCalculationStory() {
    Cypress.on('fail', (error, runnable) => {
        if (error.message.includes('A threshold has been crossed')) {
            return false
        }
    })
    getAllCalculationStoriesFromExcel().then(stories => {
    ...
This may not behave the way you want; an alternative is to create one test per story.
Spec
Cypress.env('stories').forEach(story => {
    it(`Testing story: "${story.url}"`, () => {
        cy.visit(story.url);
        cy.wait(Cypress.env("storyTableViewResultColumnCheckLoadingTimeout"));
        lighthouseAudit(story.name); // generating lighthouse report
    })
})
The stories environment variable would have to be created in cypress.config.js using the Before Spec API.
const { defineConfig } = require('cypress')
module.exports = defineConfig({
    e2e: {
        setupNodeEvents(on, config) {
            on('before:spec', async (spec) => {
                config.env['stories'] = await getAllCalculationStoriesFromExcel()
            })
            return config
        }
    }
})
The exact details depend on getAllCalculationStoriesFromExcel(); I expect that code already runs in cypress.config.js, but in a task.

Deep equal comparing not working on Cypress 9.7.0

I've updated my Cypress to version 9.7.0 and now I have a problem with deep equal. When I wrote this test line of code:
expect([1,2,3]).to.deep.equal([1,2,3]);
Everything works correctly.
While testing my redux store, I got an error that looks like:
Timed out retrying after 4000ms: expected [ Array(2) ] to deeply equal [ Array(2) ]
The arrays in the devtools console preview are the same... I've tried writing the test in two ways. I also combined it with async and timeouts.
First try:
it('Example redux check', () => {
    cy.fixture('file.json').then((fixtures) => {
        cy.window()
            .its('store')
            .invoke('getState')
            .its('queue.queueItems')
            .should('deep.equal', fixtures.store.queue.queueItems);
    });
});
Second try:
it('Example redux check', () => {
    cy.fixture('file.json').then((fixtures) => {
        const getQueueItems = (win) => {
            return win.store.getState().queue.queueItems;
        }
        cy.window()
            .pipe(getQueueItems)
            .should('deep.equal', fixtures.store.queue.queueItems);
    });
});
Has anyone had a similar issue or an idea how to avoid that timeout? Exactly the same thing happens while comparing async payloads...
I couldn't fault the deep.equal assertion in Cypress v9.7.0, even with deeply nested arrays and objects - except when the order differed.
If your problem is a difference in array order, try adding the package deep-equal-in-any-order:
const deepEqualInAnyOrder = require('deep-equal-in-any-order');
chai.use(deepEqualInAnyOrder);
it('matches when ordering is different', () => {
    const expected = [{a: {x: 1}}, {b: 2}, {c: 3}];
    expect([{a: {x: 1}}, {b: 2}, {c: 3}]).to.deep.equal(expected)           // passes
    expect([{b: 2}, {a: {x: 1}}, {c: 3}]).to.deep.equal(expected)           // fails
    expect([{b: 2}, {a: {x: 1}}, {c: 3}]).to.deep.equalInAnyOrder(expected) // passes
});
I also wanted to see if deep.equal on the <h1> element of http://example.com would succeed.
Here is my minimal, reproducible example.
// Cypress 9.7.0
it('passes deep-equal of two DOM objects', () => {
    cy.visit('http://example.com')
    cy.get('h1').then($el1 => {        // get h1 1st time
        cy.get('h1').then($el2 => {    // get another copy
            // Are they different objects?
            expect($el1).to.not.equal($el2)       // pass
            expect($el1 === $el2).to.equal(false) // pass
            // Do they deep-equal?
            expect($el1).to.deep.equal($el2)      // pass
        })
    })
})
Nearly all the frontends I use have Cypress, including some open source ones.
This is related to a few issues opened on GitHub and affects the latest versions of Cypress:
https://github.com/cypress-io/cypress/issues/21353
https://github.com/cypress-io/cypress/issues/21469
This is what I get on a WIP after upgrading some tests that use deep.equal:
Your first example looks more standard.
I recommend downgrading to a lower version. The following version worked for me before the upgrade:
9.5.4

Firebase change location/directory of an existing record

I am experimenting with manipulating data in firebase and have run into the following question:
If I have 2 records inside of a subdirectory (A) and want to move them to subdirectory (B), how do I do this in JavaScript?
MainContainer
- A
  - KEY 1 data
  - KEY 2 data
- B
I am thinking of copying the contents to a new record that saves to directory B and then deleting the directory A record, but does Firebase provide an easier way to just move the record?
Firebase Realtime Database doesn't provide a move operation. You have to read the existing data, write it to the new location, then remove the data at the original location.
Referring to https://firebase.google.com/docs/database/web/read-and-write#update_specific_fields
You can update multiple nodes in a single operation. So after reading the initial data you want to move, you can set the old fields to null to delete them and set the new ones to the target values.
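For example, the move can be expressed as a single multi-location update. The helper below is a sketch; the path strings and the `db` reference in the usage comment are assumptions based on the question's layout:

```javascript
// Build a multi-location update object that writes the data to the
// new path and deletes it from the old one in one atomic operation.
function buildMoveUpdate(oldPath, newPath, data) {
  return {
    [newPath]: data, // create the record under the new location
    [oldPath]: null, // writing null to a location deletes it
  };
}

// Hypothetical usage with the Web SDK, after reading key1Data from A:
// db.ref("MainContainer").update(
//   buildMoveUpdate("A/KEY 1", "B/KEY 1", key1Data)
// );
```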
So I came across this solution; it copies the data from your old location and recreates it at the new one:
moveFbRecord: function(oldRef, newRef) {
    oldRef.once('value', function(snap) {
        newRef.set(snap.val(), function(error) {
            if (error && typeof(console) !== 'undefined' && console.error) {
                console.error(error);
            }
        });
    });
}

Firebase functions reading/accessing remaining node information

I have created a trigger to monitor the path /Messages/{pushId}/originalText, so it should only fire if a change occurs at that specific node and nowhere else.
What I would like to do is access the remaining node data. For example, how would I read the node data at /Messages/{pushId}/followers, which sits at the same level as originalText?
Sample:
exports.makeUppercase = functions.database.ref('/Messages/{pushId}/originalText')
    .onWrite(event => {
        // how to access data at another node, for example
        // important/Messages/{pushId}/followers
    })
exports.makeUppercase = functions.database.ref('/Messages/{pushId}/originalText')
    .onWrite(event => {
        event.data.ref.parent.child('followers').once('value', data => {
            console.log('Your data: ' + data)
        })
    })
I suggest you take a look at the documentation; it is really nice and you can find everything you want:
onWrite's documentation says that .onWrite returns a DeltaSnapshot.
DeltaSnapshot's doc says that DeltaSnapshot.ref() returns a Reference.
Reference's documentation has every query method you need, in this case once.

Firebase array item is removed and immediately auto-added back (with AngularFire)

I am trying to remove an item from $firebaseArray (boxes).
The remove function:
function remove(boxJson) {
    return boxes.$remove(boxJson);
}
It works, however it is immediately added back:
This is the method that brings the array:
function getBoxes(screenIndex) {
    var boxesRef = screens
        .child("s-" + screenIndex)
        .child("boxes");
    return $firebaseArray(boxesRef);
}
I thought perhaps I'm holding multiple references to the firebaseArray, and when one deletes, the other adds it back; but then I thought Firebase should handle that, no?
Anyway, I'm lost on this. Any ideas?
UPDATE
When I hack it and delete twice (with a timeout), it seems to work:
function removeForce(screenIndex, boxId) {
    setTimeout(function () {
        API.removeBox(screenIndex, boxId);
    }, 1000);
    return API.removeBox(screenIndex, boxId);
}
and the API.removeBox:
function removeBox(screenIndex, boxId) {
    var boxRef = screens
        .child("s-" + screenIndex)
        .child("boxes")
        .child(boxId);
    return boxRef.remove();
}
When you remove something from Firebase, the operation is asynchronous. Per the docs, the proper way to remove an item from Firebase using AngularFire is:
var obj = $firebaseObject(ref);
obj.$remove().then(function(ref) {
    // data has been deleted locally and in the database
}, function(error) {
    console.log("Error:", error);
});
$remove() ... Removes the entire object locally and from the database. This method returns a promise that will be fulfilled when the data has been removed from the server. The promise will be resolved with a Firebase reference for the exterminated record.
Link to docs: https://www.firebase.com/docs/web/libraries/angular/api.html#angularfire-firebaseobject-remove
The most likely cause is that you have a security rule that disallows the deletion.
When you call boxes.$remove Firebase immediately fires the child_removed event locally, to ensure the UI is updated quickly. It then sends the command to the Firebase servers to check it and update the database.
On the server, a security rule disallows this deletion. The servers send an "it failed" response back to the client, which then raises a child_added event to fix the UI.
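As an illustration, a rule like the following (a hypothetical example, not taken from the question) would let clients create and update boxes but reject deletions, producing exactly this remove-then-reappear behaviour:

```json
{
  "rules": {
    "screens": {
      "$screen": {
        "boxes": {
          "$boxId": {
            ".write": "newData.exists()"
          }
        }
      }
    }
  }
}
```

Because a delete sends null as the new data, `newData.exists()` is false and the server rejects the write.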
Apparently I was saving the items again after deleting them. Clearly my mistake:
function removeSelected(boxes) {
    var selectedBoxes = Selector.getSelectedBoxes(boxes);
    angular.forEach(selectedBoxes, function (box) {
        BoxManager.remove(box);
    });
    Selector.clearSelection(boxes, true);
}
In the clearSelection method I was updating a field on the boxes and saving them again.
Besides the obvious mistake, this is a lesson for me on how to work with Firebase: if some part of the system keeps a copy of your deleted item, saving that copy won't produce a bug but will revive the deleted item.
For those who have a similar issue but haven't solved it yet:
There are two methods for listening to events: .on() and .once(). In my case that was the cause of the problem.
I was working on a migration procedure that should run once:
writeRef
    .orderByChild('text_hash')
    .equalTo(addItem.text_hash)
    .on('value', val => { // <--
        if (!val.exists()) {
            writeRef.push(addItem)
        }
    });
So the problem was exactly the .on method: it fires every time the data is manipulated, even from FB's console.
Changing it to .once solved that.
