Delete Item from DynamoDB not deleting - javascript

I can't for the life of me figure out why this isn't actually deleting the item from the DB. The call returns a success as if the item were deleted, but no delete is actually performed.
I'm using the JavaScript SDK in a Lambda deployed with the Serverless Framework.
import * as dynamoDbLib from "./libs/dynamodb-lib";
import { success, failure } from "./libs/response-lib";
import { isEmpty } from 'lodash';

export function main(event, context, callback) {
  const params = {
    TableName: process.env.tableName,
    // 'Key' defines the partition key and sort key of the item to be removed
    // - 'tagId': path parameter
    Key: {
      tagId: event.pathParameters.id
    }
  };
  try {
    dynamoDbLib.call("delete", params).then((error) => {
      if (!isEmpty(error)) {
        console.log(error)
        callback(null, failure({ status: false, error }));
      } else {
        callback(null, success({ status: 204 }));
      }
    });
  } catch (e) {
    console.log(e)
    callback(null, failure({ status: false }));
  }
}
The dynamodb-lib file looks like this
import AWS from "aws-sdk";

AWS.config.update({ region: "us-east-1" });

export function call(action, params) {
  const dynamoDb = new AWS.DynamoDB.DocumentClient();
  return dynamoDb[action](params).promise();
}
Every other call so far (put, and scan to list all items) works just fine; delete is the only one that doesn't actually do anything.
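One thing worth noting when debugging this: `DocumentClient.delete()` resolves with the response *data* (errors reject the promise rather than appearing in `.then`), and it resolves successfully even when no item matched the `Key`. A hedged sketch for confirming whether anything was actually removed (table and key values below are assumptions standing in for the real ones): ask for `ReturnValues: "ALL_OLD"`, which makes the response include the deleted item's `Attributes`; if `Attributes` is absent, no item was found and deleted.

```javascript
// Sketch: DocumentClient.delete() succeeds even when no item matched the Key.
// Requesting ReturnValues: "ALL_OLD" echoes the removed item back, so a
// missing Attributes field means nothing was actually deleted.
const params = {
  TableName: "dev-nametags",        // assumption: stage-prefixed table name
  Key: { tagId: "some-tag-id" },    // assumption: a real tagId value
  ReturnValues: "ALL_OLD"
};

// Pure check applied to the resolved response data:
function wasDeleted(data) {
  // data.Attributes is only present when an item was found and removed
  return Boolean(data && data.Attributes);
}

// usage sketch:
// dynamoDbLib.call("delete", params)
//   .then((data) => console.log(wasDeleted(data) ? "deleted" : "no matching item"))
//   .catch((err) => console.error(err));
```

If `wasDeleted` comes back false, the key name or type in the request likely doesn't match an existing item, even though the call "succeeds".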
EDIT: This is my table structure by the way
Resources:
  NametagsTable:
    Type: AWS::DynamoDB::Table
    Properties:
      # Generate a name based on the stage
      TableName: ${self:custom.stage}-nametags
      AttributeDefinitions:
        - AttributeName: tagId
          AttributeType: S
      KeySchema:
        - AttributeName: tagId
          KeyType: HASH
      # Set the capacity based on the stage
      ProvisionedThroughput:
        ReadCapacityUnits: ${self:custom.tableThroughput}
        WriteCapacityUnits: ${self:custom.tableThroughput}
Here is the function in serverless.yml
delete:
  # Defines an HTTP API endpoint that calls the main function in delete.js
  # - path: url path is /tags/{id}
  # - method: DELETE request
  handler: delete.main
  events:
    - http:
        path: tags/{id}
        method: delete
        cors: true


Updating a Related Content-Type in Strapi using the `connect` operator

I'm learning Strapi and following these docs: https://docs.strapi.io/developer-docs/latest/developer-resources/database-apis-reference/rest/relations.html#connect
I expect to be able to specify the name of a relationship and then connect 1 or more records by ID during an update.
My code is as follows:
// DOES NOT WORK

/**
 * Create the file record at root folder with file info.
 */
const file = await strapi.query(FILE_MODEL_UID).create({
  data: {
    folderPath: "/",
    ...fileInfo,
  },
});

/**
 * Update content-source record with new status.
 */
await strapi.query("api::content-source.content-source").update({
  where: { uuid: jobName },
  data: {
    assets: {
      connect: [file.id],
    },
  },
});
This operation fails to 1) throw an error, and 2) establish the connection. However, the set operation works, though it overwrites any prior relationships.
// DOES WORK

/**
 * Create the file record at root folder with file info.
 */
const file = await strapi.query(FILE_MODEL_UID).create({
  data: {
    folderPath: "/",
    ...fileInfo,
  },
});

/**
 * Update content-source record with new status.
 */
await strapi.query("api::content-source.content-source").update({
  where: { uuid: jobName },
  data: {
    assets: {
      set: [file.id],
    },
  },
});
Is there something obvious I'm missing, or is this a bug?
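Since `set` is confirmed to work but replaces prior relations, one workaround (a sketch only, assuming the record can be fetched with its `assets` populated; verify the populate shape against your Strapi version's Query Engine docs) is to read the current relation ids, merge in the new one, and `set` the union:

```javascript
// Sketch of a workaround: emulate `connect` by setting the union of the
// existing relation ids and the new ones.
// (FILE_MODEL_UID and jobName are taken from the question; the findOne/populate
// shape is an assumption.)

// Pure merge helper: dedupes while preserving order.
function mergeRelationIds(existingIds, newIds) {
  return [...new Set([...existingIds, ...newIds])];
}

// usage sketch:
// const current = await strapi
//   .query("api::content-source.content-source")
//   .findOne({ where: { uuid: jobName }, populate: ["assets"] });
// const ids = mergeRelationIds(
//   (current.assets || []).map((a) => a.id),
//   [file.id]
// );
// await strapi.query("api::content-source.content-source").update({
//   where: { uuid: jobName },
//   data: { assets: { set: ids } },
// });
```

This costs an extra read per update, but it relies only on the operation the question has already shown to work.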

"Error 400: The template parameters are invalid" - Running Dataflow Flex Template via Google Cloud Function

I'm trying to run a Dataflow Flex Template job via a Cloud Function which is triggered by a Pub/Sub message.
However, while the Dataflow pipeline works fine when run from gcloud locally on the command line, or via the Google Flex Template API Explorer, when I try to launch it from a Google Cloud Function I keep running up against this error:
"Problem running dataflow template, error was: { Error: The template parameters are invalid."
To note, I'm using exactly the same parameters as when I was testing; I've tried removing and reformatting them all on a few occasions, and I've passed them in as literal strings rather than environment variables.
I had seen suggestions that this error is caused by incorrect formatting and/or incorrect parameter key names, possibly because I built the pipeline in Python but run the Cloud Function in Node.js (i.e. "temp_location" vs "tempLocation" or similar).
Or I was wondering whether having set google_cloud_options in my main.py file prevents me from specifying them as runtime arguments from JavaScript?
But having tried various combinations, none of this makes much difference. So my question is: does anyone know a fix for this issue, or failing that, how to debug which parameters are wrong? Do I need to be passing my GOOGLE_APPLICATION_CREDENTIALS service-account.json file as well?
Is the problem simply that the Cloud Function needs to be written in Python as well as the Dataflow pipeline?
Google Cloud Function
// trigger.js
const { google } = require("googleapis");

exports.triggerFlexTemplate = (event, context) => {
  // PubSub message payloads and attributes not used.
  const pubsubMessage = event.data;
  console.log(Buffer.from(pubsubMessage, "base64").toString());
  console.log(event.attributes);

  google.auth.getApplicationDefault(function (err, authClient, projectId) {
    if (err) {
      console.error("Error occurred: " + err.toString());
      throw new Error(err);
    }
    const dataflow = google.dataflow({ version: "v1b3", auth: authClient });
    const date = new Date().toISOString().slice(0, 10);
    dataflow.projects.locations.flexTemplates.launch(
      {
        projectId: projectId,
        location: process.env.region,
        resource: {
          launchParameter: {
            jobName: `rank-stream-to-bigquery-${date}`,
            containerSpecGcsPath: `gs://${process.env.bucket}/dataflow_templates/rank_stream_to_bigquery.json`,
            parameters: {
              api_key: process.env.api_key,
              date: date,
              campaign: process.env.campaign,
              se: process.env.se,
              domain: process.env.domain,
              project: process.env.project,
              dataset: process.env.dataset,
              table_name: process.env.table_name,
              runner: process.env.runner,
              experiments: "disable_flex_template_entrypoint_override",
              staging_location: `gs://${process.env.bucket}/staging`,
              temp_location: `gs://${process.env.bucket}/temp`,
              service_account_email: process.env.service_account_email,
            }
          }
        }
      },
      function (err, response) {
        if (err) {
          console.error("Problem running dataflow template, error was: ", err);
        }
        console.log("Dataflow template response: ", response);
      }
    );
  });
};
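One thing worth checking (a hedged sketch, not a confirmed fix): the Flex Template launch API validates `parameters` against the parameters declared in the template's metadata, and runtime settings such as temp/staging locations, the service account, and experiments have their own camelCase fields on the launch request's `environment` object (FlexTemplateRuntimeEnvironment) rather than belonging in `parameters`. A reshaped launch body might therefore look like this (bucket/env names are the question's assumptions):

```javascript
// Hedged sketch: move runtime settings out of `parameters` (validated against
// the template's metadata) into the `environment` object, whose fields are
// camelCase. jobName/date are hardcoded here purely for illustration.
const bucket = process.env.bucket;

const launchParameter = {
  jobName: "rank-stream-to-bigquery-2020-01-01",
  containerSpecGcsPath: `gs://${bucket}/dataflow_templates/rank_stream_to_bigquery.json`,
  // Only parameters declared in the template's metadata belong here:
  parameters: {
    api_key: process.env.api_key,
    date: "2020-01-01",
    campaign: process.env.campaign,
    se: process.env.se,
    domain: process.env.domain,
    project: process.env.project,
    dataset: process.env.dataset,
    table_name: process.env.table_name,
  },
  // Runtime settings use the environment object, in camelCase:
  environment: {
    tempLocation: `gs://${bucket}/temp`,
    stagingLocation: `gs://${bucket}/staging`,
    serviceAccountEmail: process.env.service_account_email,
    additionalExperiments: ["disable_flex_template_entrypoint_override"],
  },
};
```

If an undeclared key such as `runner` or `temp_location` sits in `parameters`, the "template parameters are invalid" error is one plausible outcome, so trimming `parameters` down to the declared set is a reasonable first debugging step.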
Dataflow Pipeline
# main.py
from __future__ import print_function, absolute_import

import argparse
import logging
import sys

import apache_beam as beam
from apache_beam.io.gcp.internal.clients import bigquery
from apache_beam.metrics.metric import Metrics
from apache_beam.options.pipeline_options import PipelineOptions, GoogleCloudOptions, StandardOptions, SetupOptions, WorkerOptions

import datetime as dt
from datetime import timedelta, date
import time
import re

logging.getLogger().setLevel(logging.INFO)

SCHEMA = {'fields': [
    {'name': 'date', 'type': 'DATE', 'mode': 'REQUIRED'},
    {'name': 'url', 'type': 'STRING', 'mode': 'NULLABLE'},
    {'name': 'lp', 'type': 'STRING', 'mode': 'NULLABLE'},
    {'name': 'keyword', 'type': 'STRING', 'mode': 'REQUIRED'},
    {'name': 'se', 'type': 'STRING', 'mode': 'REQUIRED'},
    {'name': 'se_name', 'type': 'STRING', 'mode': 'REQUIRED'},
    {'name': 'search_volume', 'type': 'INTEGER', 'mode': 'NULLABLE'},
    {'name': 'rank', 'type': 'INTEGER', 'mode': 'NULLABLE'}]}


class GetAPI():
    def __init__(self, data={}):
        self.num_api_errors = Metrics.counter(self.__class__, 'num_api_errors')
        self.data = data

    def get_job(self):
        import requests
        params = dict(
            key=self.data.api_key,
            date=self.data.date,
            campaign_id=self.data.campaign,
            se_id=self.data.se,
            domain=self.data.domain,
            output='json'
        )
        endpoint = "https://www.rankranger.com/api/v2/?rank_stats"
        logging.info("Endpoint: {}".format(str(endpoint)))
        try:
            res = requests.get(endpoint, params)
            if res.status_code == 200:
                json_data = res.json()
                if 'result' in json_data:
                    response = json_data.get('result')
                    filtered = [{k: v for k, v in d.items() if v != '-' and len(v) != 0} for d in response]
                    return filtered
        except Exception as e:
            self.num_api_errors.inc()
            logging.error('Exception: {}'.format(e))
            logging.error('Extract error on "%s"', 'Rank API')


def format_dates(api):
    import datetime as dt
    api['date'] = dt.datetime.strptime(api['date'], "%m/%d/%Y").strftime("%Y-%m-%d")
    return api


def run(argv=None):
    parser = argparse.ArgumentParser()
    parser.add_argument('--api_key', type=str, help='API key for Rank API.')
    parser.add_argument('--date', type=str, help='Run date in YYYY-MM-DD format.')
    parser.add_argument('--campaign', type=str, help='Campaign ID for Rank API')
    parser.add_argument('--se', type=str, help='Search Engine ID for Rank API')
    parser.add_argument('--domain', type=str, help='Domain for Rank API')
    parser.add_argument('--project', type=str, help='Your GCS project.')
    parser.add_argument('--dataset', type=str,
                        help='BigQuery Dataset to write tables to. Must already exist.')
    parser.add_argument('--table_name', type=str,
                        help='The BigQuery table name. Should not already exist.')
    parser.add_argument('--bucket', type=str,
                        help='GCS Bucket name to save the temp + staging folders to')
    parser.add_argument('--runner', type=str, help='Type of DataFlow runner.')
    args, pipeline_args = parser.parse_known_args(argv)

    options = PipelineOptions(pipeline_args)
    wo = options.view_as(WorkerOptions)  # type: WorkerOptions
    wo.worker_zone = "europe-west1-b"
    google_cloud_options = options.view_as(GoogleCloudOptions)
    google_cloud_options.project = args.project
    google_cloud_options.staging_location = "gs://{0}/staging".format(args.bucket)
    google_cloud_options.temp_location = "gs://{0}/temp".format(args.bucket)
    google_cloud_options.region = 'europe-west2'
    options.view_as(StandardOptions).runner = args.runner

    p = beam.Pipeline(options=options)

    api = (
        p
        | 'create' >> beam.Create(GetAPI(data=args).get_job())
        | 'format dates' >> beam.Map(format_dates)
    )

    BQ = (api | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
        table=args.table_name,
        dataset=args.dataset,
        project=args.project,
        schema=SCHEMA,
        create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
        write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND))

    p.run()


if __name__ == '__main__':
    run()
Any help would be hugely appreciated!

How to create a middleware to check roles in Nuxt.js

I'm trying to create a middleware to check my users' roles.
// middleware/is-admin.js
export default function (context) {
let user = context.store.getters['auth/user']
if ( user.role !== 'admin' ) {
return context.redirect('/errors/403')
}
}
In my .vue file, I'm putting this on:
middleware: [ 'is-admin' ]
It works.
Now, I'd like to check if the user also has another role. So, I create a new middleware:
// middleware/is-consultant.js
export default function (context) {
  let user = context.store.getters['auth/user']
  if (user.role !== 'consultant') {
    return context.redirect('/errors/403')
  }
}
And in my .vue file:
middleware: [ 'is-admin', 'is-consultant' ]
Unfortunately, once I do that, visiting the route with an administrator role no longer works.
Can you tell me how I can create a middleware that checks multiple roles with Nuxt.js?
Thank you!
The idea is that every page has its own authority level. In the middleware you compare the current user's authority level with the current page's authority level, and if the user's is lower you redirect them. It's a very elegant solution proposed by the Nuxt.js creator in a GitHub issue.
<template>
  <h1>Only an admin can see this page</h1>
</template>

<script>
export default {
  middleware: 'auth',
  meta: {
    auth: { authority: 2 }
  }
}
</script>
Then in your middleware/auth.js:
export default ({ store, route, redirect, error }) => {
  // Check if user is connected first
  if (!store.getters['user/user'].isAuthenticated) return redirect('/login')
  // Get authorizations for matched routes (with children routes too)
  const authorizationLevels = route.meta.map((meta) => {
    if (meta.auth && typeof meta.auth.authority !== 'undefined')
      return meta.auth.authority
    return 0
  })
  // Get highest authorization level
  const highestAuthority = Math.max.apply(null, authorizationLevels)
  if (store.getters['user/user'].details.general.authority < highestAuthority) {
    return error({
      statusCode: 401,
      message: 'You must be an admin to visit this page.'
    })
  }
}
You can use the auth module's scope feature in Nuxt:
export default function ({ $auth, redirect }) {
  if (!$auth.hasScope('admin')) {
    return redirect('/')
  }
}
The scope can be anything you want, e.g. consultant, editor, etc.
Check the documentation
Updated
Since you are using Laravel, you can have a role column in your user table, e.g.:
$table->enum('role', ['subscriber', 'admin', 'editor', 'consultant', 'writer'])->default('subscriber');
Then create an API resource; check the documentation for more. To create a user resource, run this artisan command:
php artisan make:resource UserResource
Then in your resource, you can have something like this
public function toArray($request)
{
    return [
        'id' => $this->id,
        'name' => $this->name,
        'email' => $this->email,
        'phone' => $this->phone,
        'gender' => $this->gender,
        'country' => $this->country,
        'avatar' => $this->avatar,
        'role' => $this->role,
    ];
}
Then you can import it in your controller like this:
use App\Http\Resources\UserResource;
You can get the resource like this
$userdata = new UserResource(User::find(auth()->user()->id));

return response()->json(array(
    'user' => $userdata,
));
In Nuxt
To do authentication in Nuxt, install the nuxt auth and axios modules.
Using Yarn: yarn add @nuxtjs/auth @nuxtjs/axios
Or using npm: npm install @nuxtjs/auth @nuxtjs/axios
Then register them in your nuxt.config.js:
modules: [
  '@nuxtjs/axios',
  '@nuxtjs/auth',
],
In your nuxt.config.js, add this also:
axios: {
  baseURL: 'http://127.0.0.1:8000/api'
},
auth: {
  strategies: {
    local: {
      endpoints: {
        login: { url: '/login', method: 'post', propertyName: 'access_token' },
        logout: { url: '/logout', method: 'post' },
        user: { url: '/user', method: 'get', propertyName: false }
      },
      tokenRequired: true,
      tokenType: 'Bearer',
      globalToken: true
      // autoFetchUser: true
    }
  }
}
The URLs here are your API endpoints.
Check Documentation for more
To restrict certain pages in Nuxt to specific users:
> Create a middleware, e.g. isadmin.js
Then add this:
export default function ({ $auth, redirect }) {
  if (!$auth.hasScope('admin')) {
    return redirect('/')
  }
}
Then go to the page and add the middleware:
export default {
  middleware: 'isadmin'
}
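The original two-middleware pitfall (each middleware redirects whenever the user lacks *its* role, so a user with only one of the roles can never pass both) can also be handled with a single middleware that checks against a list of allowed roles. A sketch, with the page option name `allowedRoles` being hypothetical rather than a Nuxt built-in:

```javascript
// Sketch: one middleware that passes when the user holds ANY allowed role.
// The pure check is separated from the Nuxt plumbing so it can be tested.
function isAllowed(userRole, allowedRoles) {
  // No restriction declared means the page is open
  if (!allowedRoles || allowedRoles.length === 0) return true;
  return allowedRoles.includes(userRole);
}

// middleware/has-role.js (usage sketch; store getter names from the question):
// export default function (context) {
//   const user = context.store.getters['auth/user'];
//   const allowed = context.route.meta
//     .map((meta) => meta.allowedRoles)
//     .find(Boolean);
//   if (!isAllowed(user.role, allowed)) {
//     return context.redirect('/errors/403');
//   }
// }
```

A page would then declare `meta: { allowedRoles: ['admin', 'consultant'] }` and use the single `has-role` middleware instead of chaining one middleware per role.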

How to access ember model relation properties in routes/controllers?

I'm using ember 2.7.0, and I am trying to set up my ember app with a currentUser.organization derived from the authenticated token. I am able to resolve the currentUser, but I'm unable to resolve the properties of the user's organization in my routes/controllers.
My user model:
import DS from 'ember-data';

export default DS.Model.extend({
  name: DS.attr('string'),
  email: DS.attr('string'),
  organization: DS.belongsTo('organization', { polymorphic: true, async: false })
});
I've created a service which pulls the user like this:
//app/services/session-account.js
import Ember from 'ember';
import jwtDecode from 'npm:jwt-decode';

const { inject: { service }, RSVP } = Ember;

export default Ember.Service.extend({
  session: service('session'),
  store: service(),

  loadCurrentUser() {
    return new RSVP.Promise((resolve, reject) => {
      const token = this.get('session.data').authenticated.access_token;
      if (!Ember.isEmpty(token)) {
        var token_payload = jwtDecode(token);
        return this.get('store').findRecord('user', token_payload.user_id, { include: 'organization' }).then((user) => {
          this.set('account', user);
          this.set('organization', user.organization);
          resolve();
        }, reject);
      } else {
        resolve();
      }
    });
  }
});
I'm triggering loadCurrentUser after the user logs in, and I've verified that it successfully fetches the user from the back-end (the organization data is included in the JSON:API response). I can inject the service into my controllers/routes and access the user and its direct properties, but I can't access any properties of the related organization: myservice.get('currentUser.organization.name') comes back undefined, and myservice.get('currentOrganization.name') throws the following error:
Uncaught TypeError: Cannot read property '_relationships' of undefined
If I load a user as a model and reference properties of user.organization in a template, everything works fine; but on the JavaScript side, I can't get to the organization model.
EDIT:
I've subsequently tried the following variant:
return this.get('store').findRecord('user', token_payload.user_id, { include: 'organization' }).then((user) => {
  this.set('currentUser', user);
  user.get('organization').then((organization) => {
    this.set('currentOrganization', organization);
  }, reject);
  resolve();
}, reject);
and this version (which draws from the relationship documentation in the Ember guides) throws the error TypeError: user.get(...).then is not a function.
I'd suggest trying the following:
1) Replace this.set('organization', user.organization); with this.set('organization', user.get('organization'));
2) Put console.log(user.get('organization')) after this.set('account', user); and look at the output.
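A likely explanation for the `.then is not a function` error, hedged as a reading of the code rather than a confirmed diagnosis: the model declares the relationship with `{ async: false }`, and a synchronous belongsTo returns the related record itself from `user.get('organization')` instead of a promise proxy, so there is no `.then` to call. A small normalization sketch that copes with both synchronous and asynchronous relationships:

```javascript
// Sketch: with { async: false } a belongsTo returns the related record
// directly; with { async: true } it returns a promise-like proxy.
// Promise.resolve() passes thenables through and wraps plain values,
// so wrapping the relation handles both shapes uniformly.
function resolveRelation(relation) {
  return Promise.resolve(relation);
}

// usage sketch inside loadCurrentUser:
// resolveRelation(user.get('organization')).then((organization) => {
//   this.set('currentOrganization', organization);
// });
```

With `async: false` the simplest fix is to drop the `.then` entirely and call `this.set('currentOrganization', user.get('organization'))` directly; the wrapper above is only useful if the relationship's async setting might change.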

Ember JS: Customizing adapter to include multiple parameters

I currently have a database with 2 objects:
Role
Permission
ONE Role can have MANY permissions. I currently have my Role adapter setup as:
export default DS.RESTAdapter.extend(DataAdapterMixin, {
  namespace: 'v1',
  host: ENV.APP.API_HOST,
  authorizer: 'authorizer:application',
  pathForType: function(type) {
    return 'staff/roles';
  }
});
By default, when a Permission is added to a Role, it generates this request:
Request:
PUT /v1/staff/roles/1
Body:
{
  "name": "name_of_role",
  "permissions": [
    {
      "id": "3",
      "name": "name_of_permission"
    },
    ...
  ]
}
I'd like to customize my adapter to produce a request that looks like this instead:
Request:
PUT /v1/staff/roles/1/permissions/3
Body:
<None>
Can someone please tell me how I can go about doing this? Updating the server api to accommodate Ember JS is unfortunately not an option.
UPDATE:
Based on Ryan's response, here's a (I'll call it messy) workaround that did the trick for me.
Open to suggestions for making this more elegant:
export default DS.RESTAdapter.extend(DataAdapterMixin, {
  namespace: 'v1',
  host: ENV.APP.API_HOST,
  authorizer: 'authorizer:application',
  pathForType: function(type) {
    return 'staff/roles';
  },
  updateRecord: function(embestore, type, snapshot) {
    var roleID = snapshot.id;
    var permissionID = snapshot.adapterOptions.permissionID;
    var url = ENV.APP.API_HOST + "/v1/staff/roles/" + roleID + "/permissions/" + permissionID;
    return new Ember.RSVP.Promise(function(resolve, reject) {
      Ember.$.ajax({
        type: 'PUT',
        url: url,
        headers: { 'Authorization': 'OAUTH_TOKEN' },
        dataType: 'json',
      }).then(function(data) {
        Ember.run(null, resolve, data);
      }, function(jqXHR) {
        jqXHR.then = null; // tame jQuery's ill mannered promises
        Ember.run(null, reject, jqXHR);
      });
    });
  },
});
I can't find it in the Ember documentation, but there is a universal ajax method on the adapter that you can override.
In my adapter, to fit our auth scheme, I've done this:
export default DS.RESTAdapter.extend({
  host: ENV.host,
  ajax: function(url, method, hash) {
    if (hash) {
      if (hash.data !== undefined && hash.data !== null) {
        hash.data.sessionId = this.getSessionId();
      }
    } else {
      hash = {
        data: {}
      };
      hash.data.sessionId = this.getSessionId();
    }
    return this._super(url, method, hash);
  },
  getSessionId: function() {
    return window.sessionStorage.getItem('sessionId') || {};
  }
});
This attaches the sessionId to every ajax call made to the server throughout the entire application.
Changing it to modify your URL based on the hash arguments passed in shouldn't be an issue.
My version of Ember is 2.3.2, but I'm on the latest stable (2.5.2) version of ember-data, and this is still working great, in case you are worried about the age of that blog post I found.
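A less invasive alternative to replacing updateRecord wholesale, offered as a sketch: Ember Data's BuildURLMixin exposes a urlForUpdateRecord(id, modelName, snapshot) hook, and since it receives the snapshot (and therefore adapterOptions), the nested URL can be built there while the rest of the update flow stays default. Note this only changes the URL shape; the PUT body would still need separate handling to match the empty-body API.

```javascript
// Sketch: build the nested permission URL from the pieces the question uses.
// Kept as a pure function so the URL shape is easy to verify.
function buildPermissionUrl(host, roleId, permissionId) {
  return host + "/v1/staff/roles/" + roleId + "/permissions/" + permissionId;
}

// usage sketch inside the adapter (ENV.APP.API_HOST from the question):
// urlForUpdateRecord(id, modelName, snapshot) {
//   const permissionID = snapshot.adapterOptions.permissionID;
//   return buildPermissionUrl(ENV.APP.API_HOST, id, permissionID);
// }
```

The record would then be saved with `role.save({ adapterOptions: { permissionID: 3 } })`, the same pattern the accepted workaround already relies on.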
