I am currently playing around with face recognition in JavaScript.
I am new to JavaScript and have run into some problems that I hope you can help me with. In my project I use the following API: https://justadudewhohacks.github.io/face-api.js/docs/index.html. The API is correctly embedded in the project.
My Project:
The user uploads a picture on an HTML page with this input element:
<input class="chooseFile" type="file" id="preview">
Working: the project detects the faces in the uploaded picture.
Not working: the project should recognize which person is in the uploaded picture.
My code so far (in the script.js file) where I try to load my labeled images:
async function loadLabeledImages() {
  console.log("load labeled images");
  const labels = ["BlackWidow", "CaptainAmerica", "CaptainMarvel",
                  "Hawkeye", "JimRhodes", "Thor", "TonyStark"];
  return Promise.all(labels.map(async label => {
    const descriptions = [];
    for (let i = 1; i <= 2; i++) {
      const filePath = `labeled_images/${label}/${i}.jpg`;
      const imgAsBase64String = await imgToBase64(filePath);
      const imgAsBlob = base64toBlob(imgAsBase64String, 'image/*');
      const img = await faceapi.fetchImage(imgAsBlob);
      const detections = await faceapi.detectSingleFace(img)
        .withFaceLandmarks().withFaceDescriptor();
      descriptions.push(detections.descriptor);
    }
    // Note: the face-api.js class is LabeledFaceDescriptors
    return new faceapi.LabeledFaceDescriptors(label, descriptions);
  }));
}
The labeled images are stored locally in the same directory.
As far as I know, the problem occurs on the faceapi.fetchImage(...) call.
When I run the HTML page with a Live Server I get the following error:
Failed to load resource: the server responded with a status of 404 (Not Found).
The network headers (screenshot omitted):
I also tried storing the labeled images folder on GitHub and passing the link to the GitHub folder to the fetchImage function. I don't want the pictures to be shown on my HTML page (only the uploaded picture).
Is there a solution for using the fetchImage function with locally stored images?
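Since faceapi.fetchImage ultimately performs an ordinary fetch of a URL, a 404 usually means the relative path does not resolve to a location the live server actually serves. The resolution can be sanity-checked in isolation with the standard URL API; the localhost address below is a hypothetical Live Server page URL, not one taken from the question:

```javascript
// A relative path resolves against the page's base URL, so labeled_images
// must sit under the directory the live server was started from.
// 'http://127.0.0.1:5500/index.html' is a hypothetical Live Server page URL.
const base = 'http://127.0.0.1:5500/index.html';
const resolved = new URL('labeled_images/BlackWidow/1.jpg', base);
console.log(resolved.href);
// → http://127.0.0.1:5500/labeled_images/BlackWidow/1.jpg
```

If the resolved URL loads in a browser tab, the same relative string can most likely be handed straight to faceapi.fetchImage, without the imgToBase64/base64toBlob round-trip.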
I have a process where a client uploads a document. This document can be in the form of a PDF, JPG or PNG file only and it should be reuploaded once a year (it is an errors and omissions insurance policy).
I am saving this file in a container.
For deleting files from anywhere at the application, I have this function (Node):
deleteFromBlob = async function (account, accountKey, containerName, blobFolder, blobName) {
  try {
    const {
      BlobServiceClient,
      StorageSharedKeyCredential
    } = require("@azure/storage-blob");
    const sharedKeyCredential = new StorageSharedKeyCredential(account, accountKey);
    const blobServiceClient = new BlobServiceClient(
      `https://${account}.blob.core.windows.net`,
      sharedKeyCredential
    );
    const containerClient = blobServiceClient.getContainerClient(containerName);
    const blockBlobClient = containerClient.getBlockBlobClient(blobFolder + '/' + blobName);
    await blockBlobClient.deleteIfExists();
    return true;
  }
  catch (e) {
    return false;
  }
}
And this works perfectly when I know the name and extension of the file I want to delete, like "2448.pdf":
let deleteFile = await utils.deleteFromBlob(account, accountKey, "agents", "/eopolicies/", userData.agentid.toString() + ".pdf" )
But the function above only deletes a file whose full name I already know; for example, if the agent ID is 2448 and he uploads "policy.pdf", I save it as "2448.pdf" for easy file identification.
The problem I'm facing is that the agent may have uploaded a .PNG last year, a .DOC the year before, and a .PDF now. In that case I want to delete 2448.* and keep only the latest version of the document.
So I tried changing my function to
let deleteFile = await utils.deleteFromBlob(account, accountKey, "agents", "/eopolicies/", userData.agentid.toString() + ".*" )
And of course that does not work...
I tried to find a solution, and all I found was how to list the contents of a folder, loop over it, and delete the specific file I want; but that will not work for me, since there are 37,000 EO policies in that folder.
Is there a way to delete files with a specific name and any extension?
Thanks.
I've never tried using a wildcard on the extension side of the file name. However, I would iterate through the files in the directory, find the ones that contain the specific string you are looking for, get their indexes, and delete from there.
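Azure Blob Storage has no server-side wildcard delete, but ContainerClient.listBlobsFlat accepts a prefix option, so only the handful of blobs named 2448.* would be returned rather than all 37,000. A minimal sketch under that assumption; mockContainer below is a hypothetical stand-in for @azure/storage-blob's ContainerClient so the logic can run without a storage account:

```javascript
// Delete every blob whose name starts with a prefix (e.g. "eopolicies/2448.").
// `containerClient` is assumed to have the @azure/storage-blob ContainerClient
// shape: listBlobsFlat({ prefix }) returning an async iterable, and
// getBlockBlobClient(name).deleteIfExists().
async function deleteByPrefix(containerClient, prefix) {
  const deleted = [];
  for await (const blob of containerClient.listBlobsFlat({ prefix })) {
    await containerClient.getBlockBlobClient(blob.name).deleteIfExists();
    deleted.push(blob.name);
  }
  return deleted;
}

// Hypothetical in-memory mock of the ContainerClient surface used above.
function mockContainer(names) {
  const store = new Set(names);
  return {
    listBlobsFlat: ({ prefix }) => (async function* () {
      for (const name of [...store]) if (name.startsWith(prefix)) yield { name };
    })(),
    getBlockBlobClient: (name) => ({ deleteIfExists: async () => store.delete(name) }),
    store,
  };
}

const container = mockContainer(['eopolicies/2448.png', 'eopolicies/2448.pdf', 'eopolicies/9999.pdf']);
deleteByPrefix(container, 'eopolicies/2448.').then(deleted => console.log(deleted.length)); // → 2
```

With a real client, `blobServiceClient.getContainerClient("agents")` would be passed in place of the mock, and the prefix built as `'eopolicies/' + agentId + '.'`.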
import * as IPFS from 'ipfs-core'

var ipfs = await IPFS.create({ repo: 'ok' + Math.random() })
const metadataMap = new Map()

// content.metadataCid is the IPFS CID of the metadata.json file stored on IPFS.
var res = await ipfs.cat(content.metadataCid)
// var data = await res.Json()
console.log("** Metadata from Cid **")
console.log(res)

// This just maps the content (content-cid) to its metadata,
// i.e. sets the metadata for each content item.
metadataMap.set(theCid, res)
if (metadataMap) {
  console.log('**metadata map**')
  console.log(metadataMap)
}
The console output is shown in the attached image.
I am hosting those metadata files on Pinata as well as on my desktop IPFS node, and they can be accessed using the IPFS CLI or gateways.
Example link: ipfs://bafybeibro7fxpk7sk2nfvslumxraol437ug35qz4xx2p7ygjctunb2wi3i/
The link opens fine in the browser through an IPFS gateway.
But when I use ipfs.cat(), the console just shows cat {suspended} (as shown in the attached image).
I can access images stored on IPFS using an "img" tag without any problem; in the image above, the images come from IPFS.
I also want to show the title and description, which are stored in that JSON file on IPFS.
The same issue occurs with ipfs.get()!
How can I access that metadata.json file and parse it? Am I missing a step here?
Thanks 🤞
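For reference, ipfs.cat() in ipfs-core does not return the file contents directly: it returns an async iterable of Uint8Array chunks, which is why the console shows a suspended generator. The chunks have to be collected and decoded before JSON.parse. A minimal sketch, with a hypothetical mock generator standing in for ipfs.cat so it runs without a node:

```javascript
// ipfs.cat(cid) yields Uint8Array chunks; collect, decode, then parse.
async function readJsonFromCat(catIterable) {
  const parts = [];
  for await (const chunk of catIterable) parts.push(chunk);
  const total = parts.reduce((n, p) => n + p.length, 0);
  const joined = new Uint8Array(total);
  let offset = 0;
  for (const p of parts) { joined.set(p, offset); offset += p.length; }
  return JSON.parse(new TextDecoder().decode(joined));
}

// Hypothetical stand-in for ipfs.cat(content.metadataCid):
async function* mockCat() {
  const bytes = new TextEncoder().encode('{"title":"My NFT","description":"demo"}');
  yield bytes.slice(0, 10);  // ipfs.cat streams the file in chunks like this
  yield bytes.slice(10);
}

readJsonFromCat(mockCat()).then(meta => console.log(meta.title)); // → My NFT
```

With a real node this would be `readJsonFromCat(ipfs.cat(content.metadataCid))`, and the parsed object can then go into metadataMap.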
Over the years on Snapchat I have saved lots of photos that I would now like to retrieve. The problem is they do not make it easy to export, but luckily you can go online and request all of your data (that's great).
I can see the download link for each of my photos, and if I click download in the local HTML file, the download starts.
Here's the tricky part: I have around 15,000 downloads to do, and manually clicking each individual one will take ages. I've tried extracting all of the links from the download buttons, which produces lots of URLs (great), but if you paste a URL into the browser, you get "Error: HTTP method GET is not supported by this URL".
I've tried a multitude of different Chrome extensions and none of them show the actual download, just the HTML on the left-hand side.
The download button is a clickable link (an <a> element with an href) that just starts the download in the tab.
I'm trying to figure out the best way to bulk download each of these individual files.
So, I looked at their code by downloading my own memories. They use a custom JavaScript function to download your data (a POST request with IDs in the body).
You can replicate this request, but you can also just use their method.
Open your console and use downloadMemories(<url>)
Or if you don't have the urls you can retrieve them yourself:
var links = document.getElementsByTagName("table")[0].getElementsByTagName("a");
eval(links[0].href);
UPDATE
I made a script for this:
https://github.com/ToTheMax/Snapchat-All-Memories-Downloader
Using the .json file you can download the memories one by one with Python:
import requests

# POST to the download link; the response body is the URL of the actual file
req = requests.post(url, allow_redirects=True)
file_url = req.text
file = requests.get(file_url)
Then get the correct extension and the date:
day = date.split(" ")[0]
time = date.split(" ")[1].replace(':', '-')
filename = f'memories/{day}_{time}.mp4' if type == 'VIDEO' else f'memories/{day}_{time}.jpg'
And then write it to file:
with open(filename, 'wb') as f:
f.write(file.content)
I've made a bot to download all memories.
You can download it here
It doesn't require any additional installation, just place the memories_history.json file in the same directory and run it. It skips the files that have already been downloaded.
Short answer
Download a desktop application that automates this process.
Visit downloadmysnapchatmemories.com to download the app. You can watch this tutorial guiding you through the entire process.
In short, the app reads the memories_history.json file provided by Snapchat and downloads each of the memories to your computer.
App source code
Long answer (How the app described above works)
We can iterate over each of the memories within the memories_history.json file found in your data download from Snapchat.
For each memory, we make a POST request to the URL stored as the memory's Download Link. The response will be a URL to the file itself.
Then, we can make a GET request to the returned URL to retrieve the file.
Example
Here is a simplified example of fetching and downloading a single memory using NodeJS:
Let's say we have the following memory stored in fakeMemory.json:
{
"Date": "2022-01-26 12:00:00 UTC",
"Media Type": "Image",
"Download Link": "https://app.snapchat.com/..."
}
We can do the following:
// import required libraries
const fetch = require('node-fetch'); // needed for making fetch requests
const fs = require('fs');            // needed for writing to the filesystem

// wrap in an async function, since top-level await is not available in CommonJS
(async () => {
  const memory = JSON.parse(fs.readFileSync('fakeMemory.json'));

  // POST to the download link; the response body is a URL to the file itself
  const response = await fetch(memory['Download Link'], { method: 'POST' });
  const url = await response.text();

  // We can now use the `url` to download the file.
  const download = await fetch(url, { method: 'GET' });
  const fileName = 'memory.jpg';  // file name we want this saved as
  const fileData = download.body; // contents of the file as a stream

  // Write the contents of the file to disk using Node's file system
  const fileStream = fs.createWriteStream(fileName);
  fileData.pipe(fileStream);
  fileStream.on('finish', () => {
    console.log('memory successfully downloaded as memory.jpg');
  });
})();
I am trying to use the MS Graph API and ReactJS to download a file from SharePoint and then replace the file. I have managed the download part after using the @microsoft.graph.downloadUrl value. Here is the code that gets me the XML document from SharePoint.
export async function getDriveFileList(accessToken, siteId, driveId, fileName) {
  const client = getAuthenticatedClient(accessToken);
  // https://graph.microsoft.com/v1.0/sites/{site-id}/drives/{drive-id}/root:/{item-path}
  const files = await client
    .api('/sites/' + siteId + '/drives/' + driveId + '/root:/' + fileName)
    .select('id,name,webUrl,content.downloadUrl')
    .orderby('name')
    .get();
  // console.log(files['@microsoft.graph.downloadUrl']);
  return files;
}
When attempting to upload the same file back up I get a 404 itemNotFound error in return. Because this user was able to get it to work, I think I have the MS Graph API call correct, although I am not sure I'm translating it correctly to ReactJS syntax. Even though the error message says the item was not found, I think MS Graph might actually be upset with how I'm sending the XML file back. The Microsoft documentation for updating an existing file states that the contents of the file should be sent in a stream. Since I've loaded the XML file into the component state, I'm not entirely sure how to send it back. The closest match I found involved converting a PDF to a blob, so I tried that.
export async function putDriveFile(accessToken, siteId, itemId, xmldoc) {
  const client = getAuthenticatedClient(accessToken);
  // /sites/{site-id}/drive/items/{item-id}/content
  let url = '/sites/' + siteId + '/drive/items/' + itemId + '/content';
  var convertedFile = null;
  try {
    convertedFile = new Blob([xmldoc], { type: 'text/xml' });
  }
  catch (err) {
    console.log(err);
  }
  const file = await client
    .api(url)
    .put(convertedFile);
  console.log(file);
  return file;
}
I'm pretty sure it's the way I'm sending the file back, but the Graph API has some bugs, so I can't be entirely sure. I was convinced I was getting the correct ID of the drive item, but I've seen that the site ID syntax can differ with the Graph API, so maybe it is the item ID.
The correct syntax for putting an (existing) file into a document library in SharePoint is actually PUT /sites/{site-id}/drive/items/{parent-id}:/{filename}:/content. I also found that the code below worked for taking the XML document and converting it into a blob that could be uploaded:
var xmlText = new XMLSerializer().serializeToString(this.state.xmlDoc);
var blob = new Blob([xmlText], { type: "text/xml"});
var file = new File([blob], this.props.location.state.fileName, {type: "text/xml",});
var graphReturn = await putDriveFile(accessToken, this.props.location.state.driveId, this.state.fileId,file);
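The path-based endpoint from the answer can be wrapped in a small helper. putDriveFileByPath and the mock client below are hypothetical (not from the original code); the mock only mimics the api(url).put(body) fluent shape of @microsoft/microsoft-graph-client, so the URL construction can be exercised without a token:

```javascript
// The answer's endpoint: PUT /sites/{site-id}/drive/items/{parent-id}:/{filename}:/content
async function putDriveFileByPath(client, siteId, parentId, fileName, blob) {
  const url = `/sites/${siteId}/drive/items/${parentId}:/${fileName}:/content`;
  return client.api(url).put(blob);
}

// Hypothetical mock with the fluent api(url).put(body) shape of the Graph client.
const calls = [];
const mockClient = {
  api: (url) => ({ put: async (body) => { calls.push({ url, body }); return { ok: true }; } })
};

putDriveFileByPath(mockClient, 'site1', 'root', 'policy.xml', '<doc/>')
  .then(() => console.log(calls[0].url));
// → /sites/site1/drive/items/root:/policy.xml:/content
```

With the real client, the blob produced by the XMLSerializer snippet above would be passed as the final argument.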
I am making a PhoneGap app for offline books. I save some files in a folder in the root of the device and display all the files in that folder for the user to choose from. When the user clicks a file to load, I take the full path of the file and pass it to another JavaScript function that loads it.
The files are saved in myappfolder in the root of the phone, and I am getting the file path correctly.
My code is below:
window.requestFileSystem(LocalFileSystem.PERSISTENT, 0,
  function (fileSystem) {
    fileSystem.root.getDirectory("myappfolder", { create: true },
      function (directory) {
        var directoryReader = directory.createReader();
        directoryReader.readEntries(function (entries) {
          if (entries.length > 0) {
            for (var i = 0; i < entries.length; i++) {
              var div1 = document.createElement('div');
              div1.className = 'books';
              div1.id = entries[i].fullPath;
              div1.innerHTML = entries[i].name + '<hr>';
              document.getElementById('content').appendChild(div1);
              div1.onclick = function () {
                redirect(this.id);
              };
            }
          }
        });
      });
  });
When I use this code to load the file I get:
network error in file://myappfolder/file.jpg
Kindly help me solve this, as I am new to PhoneGap.
I am passing this file to another JS function which displays it in a flip-book format. It works fine if I give it a static file in the app folder.
After lots of research I found the solution: fullPath gives the path only from the app's root directory, but
entries[i].toNativeURL()
gives the complete native path of the file. The result of the above code is
file:///storage/emulated/0/myappfolder/name.pdf
This result came from my phone (a Samsung Galaxy S4); it works fine on iPhone as well.
Hope this helps someone else.