I'm a beginner in development, and to build my personal knowledge I'm trying to make a website in HTML, CSS and JS.
The website gets a random project from ArtStation (as JSON, via Axios) and displays it on the page; 24 random images are generated.
Here is the JS code:
const images = document.getElementsByClassName("pic");
const redirect1 = document.getElementsByClassName("redirect1");
const redirect2 = document.getElementsByClassName("redirect2");

randomButton.addEventListener("click", function () {
  for (let i = 0; i < images.length; i++) {
    images[i].src = "small_square.png"; // placeholder while loading
    loadImages(i);
  }
});

async function loadImages(i) {
  const response = await axios.get("https://www.artstation.com/random_project.json");
  images[i].src = response.data.cover.smaller_square_image_url;
  redirect1[i].href = response.data.permalink;
  redirect2[i].href = response.data.permalink;
}
Don't judge it, I know it's not very clean. Now to explain my problem: I wrote the code this way so the 24 images are generated simultaneously. The problem is that if I don't open the console in Google Chrome, it loads the images one by one; this is an issue I don't have with Firefox.
I recorded the result in a video.
Do you know how to fix that?
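For what it's worth, the usual way to make all 24 requests run in parallel and then react once they have all finished is Promise.all. Below is a minimal, self-contained sketch of that pattern; loadProject is a hypothetical stand-in for the axios.get call (the real URL and DOM updates are omitted so only the control flow is visible):

```javascript
// Stand-in for the axios.get(...) call from the question: resolves with a
// fake project object after a short delay, like a network request would.
function loadProject(i) {
  return new Promise((resolve) =>
    setTimeout(() => resolve({ permalink: "https://example.com/p/" + i }), 10)
  );
}

// Start every request immediately, then wait for all of them together.
async function loadAllProjects(count) {
  const tasks = [];
  for (let i = 0; i < count; i++) {
    tasks.push(loadProject(i)); // no await here: requests run in parallel
  }
  return Promise.all(tasks); // resolves once every request has finished
}

loadAllProjects(24).then((projects) => {
  console.log(projects.length); // 24
});
```

Promise.all also keeps the results in input order, so projects[i] always corresponds to image slot i.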
I had to place videos (mp4 files) into one Photoshop document. I thought it would be easier to find a solution with png/jpg and then project it onto mp4, but the fact is that Photoshop saves png/jpg and mp4 in different ways. So although there is an import solution, I have difficulties exporting mp4 by code.
I have 2 arrays of mp4 files, and each mp4 from the first array needs to be overlaid on each one from the second and saved as mp4. I solved the import problem by loading a video into an open Photoshop file with this simple code:
function replaceContents(newFile) {
  var docRef = app.open(newFile);
  return docRef;
}

function importVideos(order_number, color) { // color was used below but never defined; passed in here
  var doc = app.activeDocument;
  var file = new File('E:/path/' + order_number + '.mp4');
  // open a new document with the needed video
  var docTemp = replaceContents(file);
  // copy the opened video layer from the new doc into my main doc
  var layer = docTemp.activeLayer.duplicate(doc.layerSets.getByName(color), ElementPlacement.PLACEATEND);
  // close the now-unnecessary doc
  docTemp.close(SaveOptions.DONOTSAVECHANGES);
  layer.name = order_number;
  return layer;
}
Here is the code for saving the videos; in doExport() the document should be saved as a video.
function Saving(color) {
  var array1 = app.activeDocument.layerSets.getByName('s');
  var array2 = app.activeDocument.layerSets.getByName(color);
  for (var i = 0; i < 5; i++) {
    array1.artLayers[i].visible = true;
    for (var j = 0; j < 5; j++) {
      array2.artLayers[j].visible = true; // index by j, so every combination is exported
      doExport();
      array2.artLayers[j].visible = false;
    }
    array1.artLayers[i].visible = false;
  }
}
So the new question: how can I export a video from Photoshop by script, with the ability to specify the file name and the save path?
P.S. If you do this through Actions, you can't pass input parameters like the name of the saved file; the Action is sealed exactly as you recorded it.
If you know how to pass arguments to Actions, you are welcome to share!
So I tried to list all the content in my container, but it keeps crashing my app.
I tried a sample from the documentation, but it still happens:
async function listBlobs(blobServiceClient, containerName) {
  const containerClient = blobServiceClient.getContainerClient(containerName);
  let i = 1;
  const blobs = containerClient.listBlobsFlat();
  for await (const blob of blobs) {
    console.log(`Blob ${i++}: ${blob.name}`);
  }
}
blobServiceClient seems to work fine, and I found that the problem is in blob.name, but I don't know how to fix it. Help me.
Here's how it looks inside the container:
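As a reference point, listBlobsFlat() returns an async iterator, and an error thrown mid-iteration (SDK, network, or an unexpected item shape) will crash the app unless the for await loop is wrapped in try/catch. Here is a defensive sketch of that pattern with the Azure client stubbed out by a fake generator, so the control flow itself can run anywhere:

```javascript
// Fake stand-in for containerClient.listBlobsFlat(): yields blob items
// with the same { name } shape the Azure SDK uses.
async function* fakeListBlobsFlat() {
  yield { name: "photo-1.png" };
  yield { name: "photo-2.png" };
}

// Consume any async iterator of blob items defensively.
async function listBlobNames(iterator) {
  const names = [];
  try {
    for await (const blob of iterator) {
      // guard against items without a usable name instead of crashing
      if (blob && typeof blob.name === "string") names.push(blob.name);
    }
  } catch (err) {
    // an SDK or network failure surfaces here instead of taking down the app
    console.error("listing failed:", err.message);
  }
  return names;
}

listBlobNames(fakeListBlobsFlat()).then((names) => console.log(names));
```

With the real SDK you would pass containerClient.listBlobsFlat() in place of the fake generator.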
I am trying to upload multiple images to Firestore in my Node.js server-side code.
I initially implemented it with the Cloud Storage bucket API:
admin.storage.bucket().upload()
I am placing the above call in a for loop.
for (let x = 0; x < images.length; x++) {
  admin.storage.bucket().upload(filepath, {options}).then(val => {
    // get the image download URL and add it to a list
    imageUrls.push(url);
    if (images.length == x + 1) {
      // break out of the loop and add the imageUrls list to Firestore
    }
  })
}
but what happens is that the code sometimes doesn't add all the image URLs to the imageUrls list, and I'll have only 1 or 2 image URLs saved to Firestore, while in Cloud Storage I can see that all the images were uploaded.
I understand that uploading takes some time, and I would like to know the best way to implement this, as I assumed the .then() method is an async approach that would take care of any await instances.
Your response would be highly appreciated.
It's not clear how you get the value of filepath in the for block, but let's imagine that images is an array of fully qualified paths to the images you wish to upload to the bucket.
The following should do the trick (untested):
const signedURLs = [];
const promises1 = [];
images.forEach(path => {
  promises1.push(admin.storage.bucket().upload(path, {options}))
})
Promise.all(promises1)
  .then(uploadResponsesArray => {
    const promises2 = [];
    const config = { // See https://googleapis.dev/nodejs/storage/latest/File.html#getSignedUrl
      action: 'read',
      expires: '03-17-2025',
      //...
    };
    uploadResponsesArray.forEach(uploadResponse => {
      const file = uploadResponse[0];
      promises2.push(file.getSignedUrl(config))
    })
    return Promise.all(promises2);
  })
  .then(getSignedUrlResponsesArray => {
    // each response is an array whose first element is the URL itself
    getSignedUrlResponsesArray.forEach(response => signedURLs.push(response[0]));
    // Do whatever you want with the signedURLs array
  });
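For reference, the same flow reads a little more directly with async/await. In this sketch, uploadOne and signOne are hypothetical stand-ins for admin.storage.bucket().upload(...) and file.getSignedUrl(config), injected as parameters so only the control flow is shown:

```javascript
// Upload every path in parallel, then request a signed URL for each
// uploaded file, again in parallel. Results come back in input order.
async function uploadAll(paths, uploadOne, signOne) {
  const files = await Promise.all(paths.map((p) => uploadOne(p)));
  const urls = await Promise.all(files.map((f) => signOne(f)));
  return urls;
}

// Tiny demonstration with fake implementations of the two SDK calls:
const fakeUpload = (p) => Promise.resolve({ name: p });
const fakeSign = (f) => Promise.resolve("https://signed.example/" + f.name);

uploadAll(["a.jpg", "b.jpg"], fakeUpload, fakeSign).then((urls) =>
  console.log(urls) // both URLs, in input order
);
```

Because every URL is awaited before the function returns, the caller can safely write the complete list to Firestore in one step.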
I have two Office.js applications:
A prototype written in pure JavaScript
A React application
I can successfully call an end-point in the pure JavaScript application that downloads an MS Word document. This file is whole, complete and uncorrupted.
However, virtually identical code in the React application, calling the same end-point and downloading the same MS Word document, returns a slightly larger data length (42523 vs 40554). The data begins identically, then diverges as follows:
Working snippet of the document data
80,75,3,4,20,0,6,0,8,0,0,0,33,0,70,117,100,65533,1,0,0,32,8,0,0,19,0,8,2,91,67,111,110,116,101,110,116,95,84,121,...
Corrupted snippet of the document data
80,75,3,4,20,0,6,0,8,0,0,0,33,0,70,117,100,63462,63411,1,0,0,32,8,0,0,19,0,8,2,91,67,111,110,116,101,110,116,95,...
In my (working) pure JavaScript application the code looks like this:
downloadPath = "https://myserver.com/seal-ws/v5/downloads/6bb4dfd7e0a528dc68f2069f9d5da5a732692f49";
var xhr = new XMLHttpRequest();
xhr.open("GET", downloadPath);
xhr.addEventListener("load", function () {
  var ret = [];
  var len = this.responseText.length;
  let trace = '';
  for (let i = 0; i < len; i += 1) {
    trace += this.responseText.charCodeAt(i) + ",";
  }
  console.log(trace);
  console.log(len);
}, false);
xhr.setRequestHeader("X-Session-Token", XSessionToken);
xhr.overrideMimeType("octet-stream; charset=x-user-defined;");
xhr.send(null);
In the React application the code that returns the corrupt file looks like this:
const downloadPath = "https://myserver.com/seal-ws/v5/downloads/6bb4dfd7e0a528dc68f2069f9d5da5a732692f49";
const xhr = new XMLHttpRequest();
xhr.open("GET", downloadPath);
xhr.addEventListener("load", function () {
  const ret = [];
  const len = this.responseText.length;
  let trace = '';
  for (let i = 0; i < len; i += 1) {
    trace = `${trace}${this.responseText.charCodeAt(i)},`
  }
  console.log(trace);
  console.log(len);
}, false);
xhr.setRequestHeader("X-Session-Token", XSessionToken);
xhr.overrideMimeType("octet-stream; charset=x-user-defined;");
xhr.send(null);
I've used Fiddler to inspect the outgoing requests from both applications, and both look well formed and identical to one another. I don't understand why the response is being corrupted in the React application with what looks like near-identical code.
It's not a browser difference, as I've tested with IE on both applications. The only thing I can think of is that the prototype is using a different version of the JavaScript definitions for the XMLHttpRequest object.
Pure Javascript App:
C:\Program Files (x86)\Microsoft Visual Studio 14.0\JavaScript\References\domWeb.js
React app:
C:\Users\\AppData\Local\Programs\Microsoft VS Code\resources\app\extensions\node_modules\typescript\lib\lib.dom.d.ts
Any ideas?
The above is a simplified version of what I'm trying to achieve, in order to illustrate the slight corruption in the data between the two approaches. Ultimately I'm trying to download an MS Word document and then open it in a new instance, as the following code describes. As detailed above, the following code works in the prototype pure JavaScript application but shows this slight corruption in the React app. The resulting document opens correctly from pure JavaScript, while the corrupted version from the React app fails to open. I'm sure it's nothing to do with React as a framework, but I'm struggling to understand what difference could cause the resulting data to be mis-decoded in this way:
const downloadPath = "https://myserver.com/seal-ws/v5/downloads/6bb4dfd7e0a528dc68f2069f9d5da5a732692f49";
const xhr = new XMLHttpRequest();
xhr.open("GET", downloadPath);
/* eslint-disable no-bitwise */
xhr.addEventListener("load", function () {
  const ret = [];
  const len = this.responseText.length;
  console.log('len');
  console.log(len);
  let trace = '';
  for (let i = 0; i < len; i += 1) {
    trace = `${trace}${this.responseText.charCodeAt(i)},`
  }
  console.log(trace);
  let byte;
  for (let i = 0; i < len; i += 1) {
    byte = (this.responseText.charCodeAt(i) & 0xFF) >>> 0;
    ret.push(String.fromCharCode(byte));
  }
  let data = ret.join('');
  data = btoa(data);
  console.log(data);
  Word.run(context => {
    const myNewDoc = context.application.createDocument(data);
    context.load(myNewDoc);
    return context.sync().then(() => {
      context.sync();
      myNewDoc.open();
    });
  });
}, false);
xhr.setRequestHeader("X-Session-Token", XSessionToken);
xhr.overrideMimeType("octet-stream; charset=x-user-defined;");
xhr.send(null);
Since two snippets that generate different outputs have nothing to do with React, try simply copy-pasting so you have an identical snippet in both projects.
My guess is that the problem is this line:
trace = `${trace}${this.responseText.charCodeAt(i)},`
which rebuilds the whole trace string on every iteration instead of just appending the new character code (trace += ... would do that).
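A different way to rule out string-decoding differences altogether (a hedged suggestion, not from the posts above) is to ask XMLHttpRequest for an ArrayBuffer instead of text, so no charset handling is involved at any point. The conversion helper below is self-contained; the request wiring is shown as comments because it follows the snippets above and only runs in a browser:

```javascript
// Helper: raw bytes -> base64 string (btoa is available in browsers and
// in Node.js 16+). This replaces the charCodeAt / & 0xFF masking loop.
function bytesToBase64(bytes) {
  let binary = "";
  for (let i = 0; i < bytes.length; i++) {
    binary += String.fromCharCode(bytes[i]);
  }
  return btoa(binary);
}

// Sketch of the request itself (browser-only, for context):
// const xhr = new XMLHttpRequest();
// xhr.open("GET", downloadPath);
// xhr.responseType = "arraybuffer"; // bytes arrive undecoded
// xhr.addEventListener("load", function () {
//   const data = bytesToBase64(new Uint8Array(this.response));
//   // hand `data` to Word.run(...) as in the snippet above
// });
// xhr.setRequestHeader("X-Session-Token", XSessionToken);
// xhr.send(null);

console.log(bytesToBase64(new Uint8Array([80, 75, 3, 4]))); // "UEsDBA=="
```

The 80,75,3,4 test bytes are the "PK" zip signature that both trace dumps in the question start with.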
I wanted to do client-side scripting for merging and splitting PDFs, so I wanted to use iTextSharp. Can that be used with JavaScript? I am new to JavaScript. Please help me with your valuable suggestions.
I found an entirely client-side solution using the PDF-LIB library: https://pdf-lib.js.org/
It uses the function mergeAllPDFs which takes one parameter: urls, which is an array of urls to the files.
Make sure to include one of the following in the header (pdf-lib.min.js is just the minified build, so only one is needed):
<script src='https://cdn.jsdelivr.net/npm/pdf-lib/dist/pdf-lib.js'></script>
<script src='https://cdn.jsdelivr.net/npm/pdf-lib/dist/pdf-lib.min.js'></script>
Then:
async function mergeAllPDFs(urls) {
  const pdfDoc = await PDFLib.PDFDocument.create();
  const numDocs = urls.length;
  for (let i = 0; i < numDocs; i++) {
    const donorPdfBytes = await fetch(urls[i]).then(res => res.arrayBuffer());
    const donorPdfDoc = await PDFLib.PDFDocument.load(donorPdfBytes);
    const docLength = donorPdfDoc.getPageCount();
    for (let k = 0; k < docLength; k++) {
      const [donorPage] = await pdfDoc.copyPages(donorPdfDoc, [k]);
      pdfDoc.addPage(donorPage);
    }
  }
  const pdfDataUri = await pdfDoc.saveAsBase64({ dataUri: true });
  // strip off everything up to the first comma: "data:application/pdf;base64,..."
  const data_pdf = pdfDataUri.substring(pdfDataUri.indexOf(',') + 1);
}
There are several client-side JavaScript libraries supporting merging and splitting existing PDFs that I'm aware of which might be useful to you:
PDF Assembler supports this and has a live demo. The Prior Art / Alternatives part of its README is worth checking out, where several other JavaScript PDF libraries (not necessarily client-side) are mentioned and commented on.
pdf-lib is a newer library under active development. Similarly, the Prior Art part is worth checking out.
If you just want to display multiple PDFs as merged into a single document in the browser, this is surely possible with pdf.js; see my answer here.
Surely that example could also be used to show a specific subset of pages, thus giving the user the ability to split it.
However, if you need the result to be available for download, there's no way (afaik) around server-side processing, at least if you want to stay in the open-source, free-of-charge realm.
Update: Using a combination of pdf.js (Mozilla) to render the PDF (which happens on a canvas by default) and jsPDF (parall.ax), one should be able to get the merged result (or anything else drawn on your canvases) for download, printing, etc. via the canvas' toDataURL() approach found in this answer, exporting page by page (using jsPDF's addPage function).
If you want to do a merge operation, you can use easy-pdf-merge; below is the link:
https://www.npmjs.com/package/easy-pdf-merge
Here is an example:
var merge = require('easy-pdf-merge');

merge(['File One.pdf', 'File Two.pdf'], 'File Output.pdf', function (err) {
  if (err)
    return console.log(err);
  console.log('Successfully merged!');
});
Hope this helps.
No, you can't use iTextSharp (a .NET port of iText, which was written in Java) with JavaScript in a browser.
You could use iText in a Java applet, or there are a couple of PDF libraries for JavaScript if you search (mostly experimental ones, I understand, such as this one Mozilla did, or this one).
You can use pure JS with pdfjs-dist:
function pdfMerge(urls, divRootId) {
  // wrapped in an async IIFE to keep the promises synchronized with await
  (async function loop() {
    for (const url_item of urls) {
      console.log("loading: " + url_item);
      var loadingTask = pdfjsLib.getDocument(url_item);
      // without this await, the documents get out of sync
      await loadingTask.promise.then(function (pdf) {
        pdf.getMetadata().then(function (metaData) {
          console.log("pdf (" + url_item + ") version: " + metaData.info.PDFFormatVersion);
        }).catch(function (err) {
          console.log('Error getting meta data');
          console.log(err);
        });
        console.log("pages: " + pdf.numPages);
        // page numbers in pdf.js are 1-based
        for (let pageNumber = 1; pageNumber <= pdf.numPages; pageNumber++) {
          pdf.getPage(pageNumber).then(function (page) {
            var div = document.createElement("div");
            var documentosDiv = document.querySelector('#' + divRootId);
            documentosDiv.appendChild(div);
            var canvas = document.createElement("canvas");
            div.appendChild(canvas);
            // Prepare canvas using PDF page dimensions
            var viewport = page.getViewport({ scale: 1 });
            var context = canvas.getContext('2d');
            canvas.height = viewport.height;
            canvas.width = viewport.width;
            // Render PDF page into canvas context
            var renderContext = {
              canvasContext: context,
              viewport: viewport
            };
            var renderTask = page.render(renderContext);
            renderTask.promise.then(function () {
              console.log('Page rendered');
            });
          });
        }
      }, function (reason) {
        // PDF loading error
        console.error(reason);
      });
    }
  })();
}
An HTML example:
<script src="pdf.js"></script>
<script src="pdf.worker.js"></script>

<h1>Merge example</h1>
<div id="documentos-container"></div>

<style>
  canvas {
    border: 1px solid black;
  }
</style>
Here https://codepen.io/hudson-moreira/pen/MWmpqPb you have a working example of adding multiple PDFs and combining them into one, all directly from the front-end using the Hopding/pdf-lib library. The layout needs some adjustment, but in this example it is possible to dynamically add and remove files directly on the page.
In short, this function combines the PDFs, but they must be read into Uint8Array format before applying it.
async function joinPdf() {
  const mergedPdf = await PDFDocument.create();
  for (const entry of window.arrayOfPdf) {
    const donorPdf = await PDFDocument.load(entry.bytes);
    const copiedPages = await mergedPdf.copyPages(donorPdf, donorPdf.getPageIndices());
    copiedPages.forEach((page) => mergedPdf.addPage(page));
  }
  const pdfBytes = await mergedPdf.save();
  download(pdfBytes, "pdfcombined" + new Date().getTime() + ".pdf", "application/pdf");
}
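As a side note, the Uint8Array format that joinPdf expects can be produced from a selected file like this (a small sketch; how window.arrayOfPdf gets populated on the page is up to you):

```javascript
// Read a File/Blob (e.g. from an <input type="file"> selection) into a
// Uint8Array. Blob.arrayBuffer() is standard in browsers and Node.js 18+.
async function fileToBytes(file) {
  const buffer = await file.arrayBuffer();
  return new Uint8Array(buffer);
}
```

Each selected file could then be stored as an object like { bytes: await fileToBytes(file) } before calling joinPdf.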
This is a library for JavaScript that generates PDFs. It works on the client:
http://parall.ax/products/jspdf#
Another solution would be to make an Ajax call, generate the PDF on the server with whatever technology you want to use (iTextSharp), and return the generated PDF to the client.
I achieved this using pdf.js in the browser (i.e. compiled it for a browser environment using Parcel):
https://github.com/Munawwar/merge-pdfs-on-browser (folder pdfjs-approach)