I followed several hundred links, most of them Stack Overflow links, trying to come up with a solution to this problem, but none yielded results.
I am simply trying to get the server to access the client's data via Google, i.e. the client's Google Sheets. I followed Google's documentation, but for the most part it only covers the client side. I followed the server-side instructions, but they are incomplete. From what I gathered, the approach is to have the client sign in via OAuth 2.0 and then send the received authorization code to the server, which exchanges it for its own access token. That is what I'm doing; however, when I try to query any data, I get the error in the title. Here are the code snippets, please let me know if there's anything I'm missing. RIP my rep.
server:
const Router = require("express").Router();
const auth = require("../utils/auth");
const fs = require("fs");
const { OAuth2Client } = require("google-auth-library"); // I tried with this library instead, and it gives the exact same error.
const { google } = require("googleapis");

var auths = {}; // map from session id to that session's OAuth2 client
// Wrap oAuth2Client.getToken in a promise so it can be awaited.
function oAuth2ClientGetToken(oAuth2Client, code) {
    return new Promise((resolve, reject) => {
        oAuth2Client.getToken(code, (err, token) => { // errors out here
            if (err) return reject(err);
            resolve(token);
        });
    });
}
// Exchange the one-time code received from the client for this server's own tokens.
async function formAuthClient(code) {
    const { client_secret, client_id, redirect_uris } = JSON.parse(
        fs.readFileSync(__dirname + "/credentials.json")
    );
    const oAuth2Client = new google.auth.OAuth2( // form auth object
        client_id, client_secret, redirect_uris[1]
    );
    // var oauth2Client = new OAuth2Client(client_id, client_secret, redirect_uris[1]); // other method
    const token = await oAuth2ClientGetToken(oAuth2Client, code).catch(console.error);
    // oauth2Client.credentials = token; // other method of OAuth 2.0
    oAuth2Client.setCredentials(token);
    return oAuth2Client;
}
Router.get("/",(req,res)=>{
res.render("home")
})
Router.post("/sheet",async (req,res)=>{
try {
const requestBody = {
properties: {
title:"hello"
}
};
var sheets= google.sheets({version:"v4", auth: auths[req.session.id]})
await sheets.spreadsheets.create(requestBody)
} catch (error) {
res.json(error)
}
})
Router.post("/login",(req,res)=>{
console.log("token: ",req.body.token);
req.session.token=req.body.token
console.log("req.session.id:",req.session.id);
auths[req.session.id]=formAuthClient(req.body.token)
res.status(200).json()
})
module.exports=Router
The client script is a simple button that triggers a "getOfflineAccess" call and asks the user to log in, then sends the resulting code to the server via "/login". Then, once another button is pushed, it calls "/sheet". I appreciate all help with this; I ran out of links to click on trying to solve this problem.
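For reference, here is a minimal sketch of what that client-side flow might look like with the gapi.auth2 (Google Sign-In) library, whose offline-access method is grantOfflineAccess(); the button id is made up, and the POST body matches the req.body.token the server reads above:

// assumes gapi.auth2 has already been loaded and initialized with the same client_id
// and the Sheets scope; "login-btn" is a hypothetical button id
document.getElementById("login-btn").addEventListener("click", async () => {
    const googleAuth = gapi.auth2.getAuthInstance();
    const { code } = await googleAuth.grantOfflineAccess(); // one-time authorization code for the server
    await fetch("/login", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ token: code }) // server stores this as req.session.token
    });
});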
I'm currently attempting to use Supabase's JavaScript API to update a row in my 'profiles' table, which has RLS enabled, from my backend.
This is done after Stripe sends me a webhook indicating a payment has been successful.
I won't include the full API call, but here is my Supabase code:
// createClient import was omitted from the snippet; included here for completeness
const { createClient } = require('@supabase/supabase-js')

const supabaseUrl = process.env.REACT_APP_SUPABASE_URL
const supabaseAnonKey = process.env.REACT_APP_SUPABASE_ANON_KEY
const supabase = createClient(supabaseUrl, supabaseAnonKey)

module.exports = async (req, res) => {
  // `event` is the verified Stripe webhook event (construction omitted, as noted above)
  if (event.type === "checkout.session.completed") {
    const userId = String(event.data.object.client_reference_id)
    const { error } = await supabase.from('profiles').update({ premium: 'true' }).eq('id', userId)
    if (error) {
      console.log(error)
    }
  }
}
However, every time I try to run this, I get a 404 error. This seems to be because I have RLS on.
As a result, I have two questions:
Is it safe for me to turn RLS off?
How can I adjust my code / apply a new database policy to allow this to be accepted?
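For context on the second question: the pattern usually suggested for trusted backend code like a Stripe webhook handler is to keep RLS on and use the service-role key on the server, since that key bypasses RLS (and therefore must never reach the browser). A minimal sketch, assuming a SUPABASE_SERVICE_ROLE_KEY environment variable and the same 'profiles' table as above:

const { createClient } = require('@supabase/supabase-js')

// Service-role client: bypasses RLS, so it may only ever be created in server-side code
const supabaseAdmin = createClient(
  process.env.REACT_APP_SUPABASE_URL,
  process.env.SUPABASE_SERVICE_ROLE_KEY // assumed env var name
)

async function markProfilePremium(userId) {
  const { error } = await supabaseAdmin
    .from('profiles')
    .update({ premium: 'true' })
    .eq('id', userId)
  if (error) console.log(error)
}

Turning RLS off entirely would also make the update succeed, but then the anon key could write to 'profiles' as well, so keeping RLS on and doing privileged writes with the service-role key is the safer split.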
I'm getting a 431 (Request Header Fields Too Large) on some API calls within a full-stack Next.js project. This only occurs on a dynamic API route (/author/get/[slug]); the result is the same from both the frontend and Postman. The server is running locally, and other endpoints work fine with exactly the same fetching logic.
The request never even reaches the Next API handler; no log appears anywhere.
The database used is MongoDB. The API is plain, simple JS.
The objective is to get a single author (this will eventually move into getStaticProps).
The API call looks like this (no headers whatsoever):
try {
const res = await fetch(`http://localhost:3000/api/author/get/${slug}`, { method: "GET" })
console.log(res)
} catch (error) { console.log(error) }
And the endpoint:
// author/get/[slug].js
import {getClient} from "../../../../src/config/mongodb-config";
export default async function handler(req, res) {
const { query } = req
const { slug } = query
if(req.method !== 'GET') {
return
}
const client = await getClient()
// findOne returns a promise, so it must be awaited before sending the response
const author = await client.db("database").collection("authors").findOne({ 'slug': slug })
res.status(200).json(author)
await client.close()
}
Tried without success:
To remove a nesting level (making the path /author/[slug])
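Worth noting for anyone hitting the same thing: a 431 means the request headers themselves (often cookies accumulated on localhost) exceed what the server accepts, which would explain why the handler never logs anything. A quick, separate diagnostic sketch (the port and file name are arbitrary) to see how large the browser's headers actually are for that route:

// check-headers.js - run `node check-headers.js`, then open
// http://localhost:3001/api/author/get/some-slug in the same browser that gets the 431.
// Cookies are shared per host (not per port), so the same headers are sent here.
const http = require('http')

http.createServer((req, res) => {
  const headerBytes = Buffer.byteLength(req.rawHeaders.join('\r\n'))
  console.log(`${req.url} -> ~${headerBytes} bytes of request headers`)
  res.end('ok')
}).listen(3001)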
I have a Node server. I pass a URL into request and then extract the contents with cheerio. Now what I'm trying to do is detect whether that webpage is using Google Analytics. How would I do this?
const request = require('request');
const cheerio = require('cheerio');

// URL is the page address passed in elsewhere on my server
request({ uri: URL }, function (error, response, body) {
    if (!error) {
        const $ = cheerio.load(body);
        const usesAnalytics = body.includes('googletag') || body.includes('analytics.js') || body.includes('ga.js');
        // const isUsingGA = ?;  <- this is the part I can't figure out
    }
});
From the official Analytics site, they say you can find certain strings that indicate GA is active. I have tried scanning the body for these, but they always return false even when the page is running GA. I included this in the code above.
I've looked at websites that use it and I can't see anything in their index that would suggest they are using it. It's only when I go to their sources that I see they are using it. How would I detect this in Node?
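For what it's worth, here is a sketch of that string check extended to look at script src attributes as well as the raw HTML (the page URL and the marker list are illustrative; pages that inject GA through a tag manager or a bundled script won't match any of them, which may be why the plain body check comes back false):

const request = require('request');
const cheerio = require('cheerio');

const targetUrl = 'https://store.google.com/'; // illustrative page to check

request({ uri: targetUrl }, (error, response, body) => {
    if (error) return console.log(error);
    const $ = cheerio.load(body);

    // collect external script URLs in addition to the raw HTML
    const scriptSrcs = $('script[src]').map((i, el) => $(el).attr('src')).get().join(' ');
    const haystack = body + ' ' + scriptSrcs;

    // common GA / gtag markers; dynamically injected trackers will not show up here
    const markers = ['google-analytics.com', 'googletagmanager.com', 'analytics.js', 'ga.js', 'gtag('];
    const isUsingGA = markers.some(m => haystack.includes(m));
    console.log('GA markers found:', isUsingGA);
});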
I have a Node script which uses Puppeteer to monitor the requests sent from a website.
I wrote this some time ago, so some parts might be irrelevant to you, but here you go:
'use strict';
const puppeteer = require('puppeteer');
function getGaTag(lookupDomain){
return new Promise((resolve) => {
(async() => {
var result = [];
const browser = await puppeteer.launch({ headless: true });
const page = await browser.newPage();
await page.setRequestInterception(true);
page.on('request', request => {
const url = request.url();
const regexp = /(UA|YT|MO)-\d+-\d+/i;
// look for tracking script
if (url.match(/^https?:\/\/www\.google-analytics\.com\/(r\/)?collect/i)) {
console.log(url.match(regexp));
console.log('\n');
result.push(url.match(regexp)[0]);
}
request.continue();
});
try {
await page.goto(lookupDomain);
await page.waitFor(9000);
} catch (err) {
console.log("Couldn't fetch page " + err);
}
await browser.close();
resolve(result);
})();
})
}
getGaTag('https://store.google.com/').then(result => {
console.log(result)
})
Running node ga-check.js now returns the UA ID of the Google Analytics tracker on the lookup domain: [ 'UA-54090495-1' ], which in this case is https://store.google.com.
Hope this helps!
My Firebase web project had been working for several months. But on Sunday, June 3, 2018, my application stopped sending tweets with media (images) attached. I did not change the failing code before the 3rd, and I have even reverted to code that worked before that date, but the app still fails :(
SDK versions:
I am using the most up-to-date versions of Firebase tools (3.18.6) and Cloud Functions (1.0.3), along with twit (2.2.10), a JavaScript library for the Twitter API.
Do note that my project was also working on older versions of the above, including pre-v1.0 Cloud Functions. Also note, I am still able to send regular text tweets, just not ones with any media (image, gif, mp4).
This mainly relates to Twitter's API, but I cannot rule out something funky going on in Firebase's Node.js environment.
How to reproduce:
For simplicity, I will link to the code in the tutorial I originally used when starting the project.
Set up a Twitter account and retrieve all the necessary tokens as outlined in the tutorial. Then simply call the cloud function below and it will attempt to tweet the NASA image of the day.
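For completeness, "calling the cloud function" just means hitting the HTTPS endpoint Firebase assigns to the onRequest function; a sketch, with the region and project id in the URL left as placeholders:

// any HTTP client works; the hostname follows the standard Cloud Functions URL pattern
const request = require('request');

request.get(
    'https://<region>-<project-id>.cloudfunctions.net/http_testMediaTweet',
    (err, response, body) => {
        if (err) return console.log(err);
        console.log(response.statusCode, body); // 'success' once the tweet is posted
    }
);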
The function is able to upload the picture to the Twitter server, and the response I get is as expected:
{ media_id: 1004461244487643100,
media_id_string: '1004461244487643136',
media_key: '5_1004461244487643136',
size: 92917,
expires_after_secs: 86400,
image: { image_type: 'image/jpeg', w: 960, h: 1318 } }
However, once it attempts to post the tweet with the media attached, I receive an error:
code 324: 'Unsupported raw media category'
which doesn't exist in Twitter's docs: https://developer.twitter.com/en/docs/basics/response-codes.html
Now, code 324 does exist, but Twitter's docs give a different description for it:
"The validation of media ids failed"
which I have yet to receive. So my media id is valid, but something else is wrong? Nowhere on the internet can I find someone with this exact error.
Link to tutorial code:
https://medium.freecodecamp.org/how-to-build-and-deploy-a-multifunctional-twitter-bot-49e941bb3092
JavaScript code that reproduces the issue:
index.js:
'use strict';
const functions = require('firebase-functions');
const request = require('request');
const path = require('path');
const os = require('os');
const fs = require('fs');
const tmpDir = os.tmpdir(); // Ref to the temporary dir on worker machine
const Twit = require('twit');
const T = new Twit({
consumer_key: 'your twitter key'
,consumer_secret: 'your twitter secret'
,access_token: 'your twitter token'
,access_token_secret: 'your twitter token secret'
});
exports.http_testMediaTweet = functions.https.onRequest((req, res) => {
function getPhoto() {
const parameters = {
url: 'https://api.nasa.gov/planetary/apod',
qs: {
api_key: 'DEMO_KEY'
},
encoding: 'binary'
};
request.get(parameters, (err, response, body) => {
if (err) {console.log('err: ' + err)}
body = JSON.parse(body);
var f = path.join(tmpDir, 'nasa.jpg');
saveFile(body, f);
});
}
function saveFile(body, fileName) {
const file = fs.createWriteStream(fileName);
request(body).pipe(file).on('close', err => { // body is the parsed APOD response; request() fetches its `url` (the image) and pipes it to the file
if (err) {
console.log(err)
} else {
console.log('Media saved! '+body.title)
const descriptionText = body.title
uploadMedia(descriptionText, fileName);
}
})
}
function uploadMedia(descriptionText, fileName) {
const filePath = path.join(__dirname, `../${fileName}`) // note: computed but not used below; fileName (the tmp path) is what gets passed
console.log(`uploadMedia: file PATH ${fileName}`)
T.postMediaChunked({
file_path: fileName
}, (err, data, response) => {
if (err) {
console.log(err)
} else {
console.log(data)
const params = {
status: descriptionText,
media_ids: data.media_id_string
}
postStatus(params);
}
})
}
function postStatus(params) {
T.post('statuses/update', params, (err, data, response) => {
if (err) {
console.log(err)
res.status(500).send('Error: ' + err);
} else {
console.log('Status posted!')
res.status(200).send('success');
}
})
}
// Do thing
getPhoto();
});
I was hoping to launch my app next week, but this has become a major issue for me. I've tried everything I can think of and consulted the docs for Twitter and the JS library, but I seem to be doing everything right. Hopefully someone can shed some light on this, thanks.
The Goal: The user uploads to S3, a Lambda function is triggered to take the file and send it to the Google Vision API for analysis, returning the results.
According to this, google-cloud requires native libraries and must be compiled against the OS that Lambda runs on. Using lambda-packager threw an error, but some internet searching turned up using an EC2 instance with Node and NPM to run the install instead. In the spirit of hacking through this, that's what I did to get it mostly working*. At least Lambda stopped giving me ELF header errors.
My current problem is that there are two ways to call the Vision API; neither works, and each returns a different error (mostly).
The Common Code: This code is always the same; it's at the top of the function, and I'm separating it out to keep the later code blocks focused on the issue.
'use strict';
const AWS = require('aws-sdk');
const S3 = new AWS.S3();
const Bucket = 'my-awesome-bucket';
const gCloudConfig = {
projectId: 'myCoolApp',
credentials: {
client_email: 'your.serviceapi#project.email.com',
private_key: 'yourServiceApiPrivateKey'
}
}
const gCloud = require('google-cloud')(gCloudConfig);
const gVision = gCloud.vision();
Using detect(): This code always returns the error Error: error:0906D06C:PEM routines:PEM_read_bio:no start line. Theoretically it should work because the URL is public. From searching on the error, I considered it might be an HTTPS thing, so I even tried a variation where I replaced HTTPS with HTTP, but I got the same error.
exports.handler = (event, context, callback) => {
const params = {
Bucket,
Key: event.Records[0].s3.object.key
}
const img = S3.getSignedUrl('getObject', params);
gVision.detect(img, ['labels','text'], function(err, image){
if(err){
console.log('vision error', err);
}
console.log('vision result:', JSON.stringify(image, true, 2));
});
}
Using detectLabels(): This code always returns Error: ENAMETOOLONG: name too long, open ....[the image in base64].... On a suggestion, it was believed that the method shouldn't be passed the base64 image but rather the public path, which would explain why it says the name is too long (a base64 image makes for quite the URL). Unfortunately, that just gives the PEM error from above. I've also tried skipping the base64 encoding and passing the object buffer directly from AWS, but that resulted in a PEM error too.
exports.handler = (event, context, callback) => {
const params = {
Bucket,
Key: event.Records[0].s3.object.key
}
S3.getObject(params, function(err, data){
const img = data.Body.toString('base64');
gVision.detectLabels(img, function(err, labels){
if(err){
console.log('vision error', err);
}
console.log('vision result:', labels);
});
});
}
According to Best Practices, the image should be base64 encoded.
From the API docs and examples and whatever else, it seems that I'm using these correctly. I feel like I've read all those docs a million times.
I'm not sure what to make of the ENAMETOOLONG error if it's expecting base64 content. These images aren't more than 1 MB.
*The PEM error seems to be related to credentials, and given my shaky understanding of how these credentials work and of how the modules were compiled on EC2 (which doesn't have any kind of PEM files), that might be my problem. Maybe I need to set up some credentials before running npm install, in the same vein as needing to install on a Linux box? This is starting to be outside my range of understanding, so I'm hoping someone here knows.
Ideally, using detect would be better because I can specify what I want detected, but just getting any valid response from Google would be awesome. Any clues you can provide would be greatly appreciated.
So, a conversation with another colleague pointed me toward abandoning the google-cloud module and the whole loading of its API, and instead trying the Cloud REST API via curl to see if it could work that way.
Long story short, making an HTTP request to Google Cloud's REST API was how I solved this issue.
Here is the working Lambda function I have now. It probably still needs tweaks, but it is working.
'use strict';
const AWS = require('aws-sdk');
const S3 = new AWS.S3();
const Bucket = 'yourBucket';
const fs = require('fs');
const https = require('https');
const APIKey = 'AIza...your.api.key...kIVc';
const options = {
method: 'POST',
host: `vision.googleapis.com`,
path: `/v1/images:annotate?key=${APIKey}`,
headers: {
'Content-Type': 'application/json'
}
}
exports.handler = (event, context, callback) => {
const req = https.request(options, res => {
const body = [];
res.setEncoding('utf8');
res.on('data', chunk => {
body.push(chunk);
});
res.on('end', () => {
console.log('results', body.join(''));
callback(null, body.join(''));
});
});
req.on('error', err => {
console.log('problem with request:', err.message);
});
const params = {
Bucket,
Key: event.Records[0].s3.object.key
}
S3.getObject(params, function(err, data){
if (err) return console.log('S3 getObject error:', err);
// the Vision REST API takes the image inline as base64 content
const payload = {
"requests": [{
"image": {
"content": data.Body.toString('base64')
},
"features": [{
"type": "LABEL_DETECTION",
"maxResults": 10
},{
"type": "TEXT_DETECTION",
"maxResults": 10
}]
}]
};
req.write(JSON.stringify(payload));
req.end();
});
}