BigQuery similarity to "signed urls" - javascript

I have the following use case in BigQuery:
A non-trusted user will be querying a BigQuery table. Let's say the query is SELECT * FROM [bigquery.table123].
The query will return a large amount of data, let's say 200MB, which will then be displayed in the user's browser.
Our goal is to provide the most efficient way to get the 200MB of data into the user's browser (the worst way seems to be doing two trips instead of one -- from BQ to our server and then, compressed, to the client). I think the solution would probably be to let the end (non-trusted) user obtain something like a "signed URL" to perform the query directly from their browser against BigQuery. The flow would then be like this:
User issues query to our backend.
Authentication is done and a signed url is generated and passed back into javascript.
The client then sends the signed url and the data is loaded directly into the browser.
Only the exact query that has been authorized may be performed, and no other queries can be run (for example, if the client copied any tokens from the JavaScript).
I would never, ever want the end user to know the ProjectId or Table Name(s) that they are querying.
Is something like this possible to do in BigQuery? Here is an example of a similar need in Cloud Storage. Here is an example of an authenticated/trusted user doing this in the browser: https://github.com/googleapis/nodejs-bigquery/blob/master/samples/browseRows.js or https://stackoverflow.com/a/11509425/651174, but is there a way to do this in-browser for a non-trusted user?

Below is an option that involves two levels of authorized views. This allows you to shield not only the underlying data from the end user, but also to hide exactly what data is being used.
Let's assume the data is in DatasetA. The steps below explain the logic:
Create PrivateView in DatasetB - this one will target the real data in DatasetA.
Make PrivateView an Authorized View for DatasetA.
Create PublicView in DatasetC - this one will target PrivateView.
Make PublicView an Authorized View for DatasetB.
Give users read access to DatasetC.
Users will be able to run PublicView, which will actually run PrivateView against the real data.
Meanwhile, users will not be able to see the definition of PrivateView, so they will never know the ProjectId or Table Name(s) they are querying, etc.
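To automate those steps from your backend, a rough sketch with the Node.js client (@google-cloud/bigquery) might look like the following - the project, dataset, and view names are just the placeholders from the outline above, so treat it as an illustration rather than a drop-in script:
// Sketch: create PrivateView and authorize it against the dataset that holds the real data.
const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery();

async function createAndAuthorizePrivateView() {
  // 1. Create PrivateView in DatasetB, selecting from the real table in DatasetA.
  await bigquery.dataset('datasetB').createTable('privateView', {
    view: {
      query: 'SELECT * FROM `projectA.datasetA.table`',
      useLegacySql: false,
    },
  });

  // 2. Add PrivateView to DatasetA's access list so it becomes an authorized view,
  //    able to read DatasetA even though end users have no access to DatasetA itself.
  const datasetA = bigquery.dataset('datasetA');
  const [metadata] = await datasetA.getMetadata();
  metadata.access.push({
    view: {projectId: 'projectA', datasetId: 'datasetB', tableId: 'privateView'},
  });
  await datasetA.setMetadata(metadata);

  // Repeat the same two steps for PublicView in DatasetC against DatasetB,
  // then grant end users READER access on DatasetC only.
}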
Note: this does not address the "how would we prevent users from issuing queries we haven't pre-authorized?" part of your question, but I am adding my answer as you asked me to do.
Meanwhile - at least theoretically - you can embed some logic into your PrivateView that queries an internal meta-table holding info about which user is allowed to get results, and when. Such a meta-table would be managed by your backend based on authentication/tokens or whatever else you have in mind.
Below is a simplified and brief outline of that approach:
#standardSQL
WITH `projectA.datasetA.table` AS (
  SELECT 'data1' col UNION ALL
  SELECT 'data2' UNION ALL
  SELECT 'data3'
), `projectA.datasetA.applicationPermissions` AS (
  SELECT 'user1@gmail.com' user UNION ALL
  SELECT 'user2@gmail.com'
), `projectA.datasetB.privateView` AS (
  SELECT d.*
  FROM `projectA.datasetA.table` d
  CROSS JOIN `projectA.datasetA.applicationPermissions` p
  WHERE LOWER(user) = LOWER(SESSION_USER())
), `projectA.datasetC.publicView` AS (
  SELECT *
  FROM `projectA.datasetB.privateView`
)
SELECT *
FROM `projectA.datasetC.publicView`
If user1@gmail.com or user2@gmail.com runs the query below
SELECT *
FROM `projectA.datasetC.publicView`
they will get the result below
Row col
1 data1
2 data2
3 data3
while if user3@gmail.com runs the very same query, the result will be
Row col
Query returned zero records.
Obviously, you can extend your meta-table (applicationPermissions) with, for example, a timeframe during which the user is allowed to get results (the respective lines to check the time conditions would need to be added to projectA.datasetB.privateView).
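If your backend manages that meta-table, a minimal sketch of granting a time-limited permission row could look like this (again with the Node.js client; the expires_at column and the helper name are assumptions on top of the outline above):
// Sketch: the backend inserts a time-limited permission row before telling the
// browser it may query the public view. The private view would then additionally
// filter on something like:
//   LOWER(p.user) = LOWER(SESSION_USER()) AND CURRENT_TIMESTAMP() < p.expires_at
const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery();

async function grantTemporaryAccess(userEmail, minutes) {
  await bigquery.query({
    query: `
      INSERT INTO \`projectA.datasetA.applicationPermissions\` (user, expires_at)
      VALUES (@user, TIMESTAMP_ADD(CURRENT_TIMESTAMP(), INTERVAL @minutes MINUTE))`,
    params: {user: userEmail, minutes: minutes},
  });
}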

Related

MS Dynamics CRM - how to test for presence of Notes records within Accounts

I need to check (OnLoad) whether an Account has Notes added to it. I've tried doing this using workflows, but that only tests after a Note is added, rather than checking whether notes are already there (i.e. on change, on update, on status change, etc.).
I've also tried accessing Notes via JavaScript, but I can't seem to reach the attribute. I'll paste the JScript here in case it helps, though that might be the wrong way to go anyway.
function NotesAlert() {
    // the 'notestext' field is the Notes description area
    var NotesDesc = Xrm.Page.getAttribute("notestext").getValue();
    if (NotesDesc != null) {
        Xrm.Page.getAttribute("dt_legacyuserurn").setValue("Notes exist for this Acc");
    }
}
Whether it is client side (JavaScript) or server side (C# plugins, workflows), you have to query the associated notes (annotations) for that particular account record and do the validation.
If you want to validate it in JS, use the Web API to get the associated notes of the account from the form's OnLoad and OnSave events (it can even be done OnChange of fields), like below:
var query = "/api/data/v8.2/annotations?$filter=_objectid_value eq" + accountId;
If you want to validate it in plugins/workflows (custom), use FetchXML or a QueryExpression to query the associated notes of the account from the Create and Update messages.

Automatically assign a customer to a specific customer group on sign-up - Bigcommerce

I've been told by BC support that this isn't possible, but I would be surprised if there really wasn't a way.
I need to be able to automatically assign a customer to a specific customer group when they create an account. My thought:
I would add an extra field to the sign-up form
Provide a user with a code (a string or number)
User enters code when creating new account
User hits submit
On form submit I would grab the value of the extra field:
var codeInput = document.getElementById('code-input').value;
I would then compare that value to a pre-defined string, and if there is a match, I would assign that customer to groupX (with a group id of 8):
if ( codeInput === "codeIGaveToTheUser" ) {
currentUserGroupID = 8;
}
Is it possible to assign a customer to a specific group on sign-up like this (or any other way)?
Any help is much appreciated.
Although using BigCommerce webhooks would ensure the highest success rate of executing your customer group assignment app, it requires quite a bit of setup on BigCommerce (creating a draft app, getting an oAuth key, jumping jacks, etc), and may be a bit of overkill for your requirements.
Here's an easier way, in my {mostly} humble opinion, that takes advantage of much of what you included in your original question. Any solution though will nonetheless require an external server to handle the customer group assignment through the BigCommerce API.
Within the BigCommerce control panel, add in the extra field to the user sign up form like you mentioned.
As you can see, this new input field is added natively to the default registration page.
So now, when a user creates an account on your site, the value for the Signup Code (the custom field created) will be directly accessible through the API for that customer's account, as part of the customer's JSON data.
Okay, so this is nice and all, but how do we automate it?
To do so, we will have to let our external application know that a customer just registered. Furthermore, our external application will need some sort of reference to this newly created customer, so that it knows which customer to update the customer group for. Normally a BigCommerce webhook would notify us of all this, but since we aren't using a BigCommerce webhook, here's the alternative method to triggering the external script.
We will trigger our external application via the BigCommerce Registration Confirmation page - createaccount_thanks.html. This page is loaded immediately after a customer creates an account, so it is the perfect place to insert our trigger script.
Additionally, now that the customer is logged in, we can access the customer's email address via a BigCommerce global system variable: %%GLOBAL_CurrentCustomerEmail%%.
We should make an HTTP request from this page to our external application along with the customer's email address. Specifically, we can make an XMLHttpRequest via JavaScript, or to be modern, we'll use Ajax via jQuery. This script should be inserted before the closing </body> tag on createaccount_thanks.html.
Example of POST request (although a GET would suffice as well):
<script>
$(function() {
    $('.TitleHeading').text('One moment, we are finalizing your account. Please wait.').next().hide(); // Let the customer know they should wait a second before leaving this page.

    //** Configure and Execute the HTTP POST Request! **//
    $.ajax({
        url: 'the_url_to_your_script.com/script.php',
        type: 'POST',
        contentType: 'application/json',
        data: JSON.stringify({email: "%%GLOBAL_CurrentCustomerEmail%%"}),
        success: function() {
            // If the customer group assignment goes well, display the page and proceed normally. This callback is only called if your script returns a 200 status code.
            $('.TitleHeading').text('%%LNG_CreateAccountThanks%%').next().show();
        },
        error: function() {
            // If the customer group assignment failed, you might want to tell your customer to contact you. This callback is called if your script returns any status except 200.
            $('.TitleHeading').text('There was a problem creating your account').after('Please contact us at +1-123-456-7890 so that we can look into the matter. Please feel free to continue shopping in the meantime.');
        }
    });
});
</script>
Now finally, you just need to create your server-side application responsible for handling the request above and updating the customer's customer group. You can use any language you desire, and BigCommerce even offers several SDKs you can use to save mega development time. Just remember that you need to host it somewhere online, and then insert its URL into the JS script above.
PHP Example (quick & dirty):
git clone https://github.com/bigcommerce/bigcommerce-api-php.git
curl -sS https://getcomposer.org/installer | php && php composer.phar install
<?php
/**
 * StackOverflow/BigCommerce :: Set Customer Group Example
 * http://stackoverflow.com/questions/37201106/
 *
 * Automatically assigning a customer group.
 */

//--------------MAIN------------------------//

// Load Dependencies:
require('bigcommerce-api-php/vendor/autoload.php');
use Bigcommerce\Api\Client as bc;

// Define BigCommerce API Credentials:
define('BC_PATH', 'https://store-abc123.mybigcommerce.com');
define('BC_USER', 'user');
define('BC_PASS', 'token');

// Load & Parse the Email From the Request Body:
$email = json_decode(file_get_contents('php://input'))->email;

// Execute Script if API Connection Good & Email Set:
if ($email && setConnection()) {
    $customer = bc::getCollection('/customers?email=' . $email)[0]; // Load customer by email
    $cgid = determineCustomerGroup($customer->form_fields[0]->value); // Determine the relevant customer group ID, via your own set string comparisons.
    bc::updateCustomer($customer->id, array('customer_group_id' => $cgid)) ? http_send_status(200) : http_send_status(500); // Update the customer group.
} else {
    http_send_status(500);
    exit;
}

//-------------------------------------------------//

/**
 * Sets & tests the API connection.
 * @return bool true if the connection successful.
 */
function setConnection() {
    try {
        bc::configure(array(
            'store_url' => BC_PATH,
            'username'  => BC_USER,
            'api_key'   => BC_PASS
        ));
    } catch (Exception $e) {
        return false;
    }
    return bc::getResource('/time') ? true : false; // Test Connection
}

/**
 * Hard define the customer group & signup code associations here.
 * @param string The code user used at signup.
 * @return int The associated customer group ID.
 */
function determineCustomerGroup($signupCode) {
    switch ($signupCode) {
        case 'test123':
            return 1;
        case 'codeIGaveToTheUser':
            return 8;
        default:
            return 0;
    }
}
So then you would do your customer group string comparisons directly in the server-side program. I'd recommend you write your own BC API script, as the one above is really something along the lines of functional pseudo-code, present mostly to show the general idea. HTH
You would need to set up a server to listen for webhooks unless you wanted to do a cron job. We have some basic information on the developer portal, but I have included more resources below. From there, you'd need to choose your server language of choice to listen for the webhooks once they've been created, respond correctly (a 200 response if received), execute code based on this information, and then take action against the BC API.
So if you were looking for a code, you'd need to listen for the store/customer/created webhook and have your code look for a custom field that contains the code. If it is present, take action; else, do nothing.
https://developer.github.com/webhooks/configuring/
http://coconut.co/how-to-create-webhooks
How do I receive Github Webhooks in Python
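For illustration only, a minimal Node.js/Express sketch of such a listener for the store/customer/created webhook might look like the following; the endpoint path, port, and the two helper functions are placeholders you would replace with your own BigCommerce API calls:
// Sketch: Express listener for the store/customer/created webhook.
const express = require('express');
const app = express();
app.use(express.json());

app.post('/webhooks/customer-created', async (req, res) => {
  // Acknowledge right away so BigCommerce marks the delivery as received.
  res.sendStatus(200);

  const customerId = req.body && req.body.data && req.body.data.id;
  if (!customerId) return;

  // Hypothetical helpers: look the customer up via the BigCommerce API,
  // read the custom "Signup Code" form field, and assign the matching group.
  const customer = await fetchCustomer(customerId);          // placeholder helper
  const field = (customer.form_fields || []).find(f => f.name === 'Signup Code');
  if (field && field.value === 'codeIGaveToTheUser') {
    await updateCustomerGroup(customerId, 8);                // placeholder helper
  }
});

app.listen(3000);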

Use JS to execute MySQL queries and the security issues it involves

I've been searching around the internet for a way to define a query in JavaScript and pass it to PHP, letting PHP set up a MySQL connection, execute the query, and return the results JSON-encoded.
However, my concern is with the security of this method, since users could tamper with the queries and do things you don't want them to do, or request data you do not want them to see.
Question
In an application/plugin like this, what kind of security measures would you suggest to prevent users from requesting information I don't want them to?
Edit
The end result of my plugin will be something like
var data = Querier({
    table: "mytable",
    columns: ["column1", "column2", "column3"],
    where: "column2='blablabla'",
    limit: "10"
});
I'm going to let that function make an AJAX request and execute a query in PHP using the above data. I would like to know what security risks this throws up and how to prevent them.
It's unclear from your question whether you're allowing users to type queries that will be run against your database, or if your code running in the browser is doing it (e.g., not the user).
If it's the user: You'd have to really trust them, since they can (and probably will) destroy your database.
If it's your code running in the browser that's creating them: Don't do that. Instead, have client-side code send data to the server, and formulate the queries on the server using full precautions to prevent SQL Injection (parameterized queries, etc.).
Re your update:
I can see at least a couple issues:
Here's a risk right here:
where: "column2='blablabla'"
Now, suppose I decide to get my hands on that before it gets sent to the server and change it to:
where: "column2=');DROP TABLE Stuff; --"
You can't send a complete WHERE clause to the server, because you can't trust it. This is the point of parameterized queries:
Instead, specify the columns by name and on the PHP side, be sure you're doing correct handling of parameter values (more here).
var data = Querier({
    table: "mytable",
    columns: ["column1", "column2", "column3"],
    where: {
        column2: {
            op: '=',
            value: 'blablabla'
        }
    },
    limit: "10"
});
Now you can build your query without blindly trusting the text from the client; you'll need to do thorough validation of column names, operators, etc.
Exposing information about your schema to the entire world is giving up information for free. Security is an onion, and one of the outer layers of that onion is obscurity. It's not remotely sufficient unto itself, but it's a starting point. So don't let your client code (and therefore anyone reading it) know what your table names and column names are. Consider using server-side name mapping, etc.
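To make that concrete, here is a rough Node.js sketch (using the mysql2 package; the table mapping and column whitelist are made-up examples) of validating the structured request and building a parameterized query - the same idea carries over to PHP with PDO:
// Sketch: validate a structured query request against whitelists, then run
// a parameterized query. Table/column names here are examples, not a real schema.
const mysql = require('mysql2/promise');

const TABLES = { mytable: 'real_table_name' };                  // server-side name mapping
const COLUMNS = { mytable: ['column1', 'column2', 'column3'] }; // allowed columns per table
const OPS = ['=', '<', '>', 'LIKE'];                            // allowed operators

async function runQuerier(req, conn) {
  const table = TABLES[req.table];
  if (!table) throw new Error('unknown table');

  const cols = req.columns.filter(c => COLUMNS[req.table].includes(c));
  if (cols.length === 0) throw new Error('no valid columns');

  const clauses = [];
  const params = [];
  for (const [col, cond] of Object.entries(req.where || {})) {
    if (!COLUMNS[req.table].includes(col) || !OPS.includes(cond.op)) {
      throw new Error('invalid where clause');
    }
    clauses.push(`\`${col}\` ${cond.op} ?`); // identifiers whitelisted, values bound
    params.push(cond.value);
  }

  const limit = Math.min(parseInt(req.limit, 10) || 10, 100); // cap result size
  const sql = `SELECT ${cols.map(c => `\`${c}\``).join(', ')} FROM \`${table}\`` +
              (clauses.length ? ` WHERE ${clauses.join(' AND ')}` : '') +
              ` LIMIT ${limit}`;

  const [rows] = await conn.execute(sql, params);
  return rows;
}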
Depending on how you intend to do it, you might have a hole bigger than the one made in this economy, or no hole at all.
If you are going to write the query on the client side and send it to PHP, I would create a user with only select, insert, delete and update privileges, and without permission to access any other database.
Ignore this if you use SQLite.
I advise against this!
If you build the query on the server side, just send the server the data you want!
I would change the code into something like this:
var link = QuerierLink('sql.php'); // filename to use for the query
var data = Querier('users', link); // locks access to only this table

data.select({
    columns: ['id', 'name', 'email'],
    where: [
        {id: {'>': 5}},
        {name: {'like': '%david%'}}
    ],
    limit: 10
});
Which, on server-side, would generate the query:
select `id`,`name`,`email` from `db.users` where `id`>5 and `name` like '%david%' limit 10
This would be a lot better to use.
With prepared statements, you use:
select `id`,`name`,`email` from `db.users` where `id`>:id and `name` like :name limit 10
Passing it to PDO (pseudo-code):
$query = 'select `id`,`name`,`email` from `' . $database_name . '.users` where `id` > :id and `name` like :name limit 10';
$stmt = $PDO->prepare($query);
$stmt->execute(array(
    'id'   => 5,
    'name' => '%david%'
));
$result = $stmt->fetchAll();
This is the preferred way, since you have more control over what is passed.
Also, set the exact database name along with the name of the table, so you prevent users from accessing stuff in other tables/databases.
Other databases include information_schema, which has every single piece of information about your entire database, including the user list and restrictions.
Ignore this for SQLite.
If you are going to use MySQL/MariaDB/another server, you should disable all file read/write permissions.
You really don't want anyone writing files onto your server! Especially not to any location they wish.
The risk: they have a new puppy for the attackers to do whatever they wish with! This is a massive hole.
Solution: disable FILE privileges, or limit the access to a directory where you block external access using .htaccess, via the --secure-file-priv startup option or the @@secure_file_priv system variable.
If you use SQLite, just create a .sqlite(3) file, based on a template file, for each client that connects. Then delete the file when the user closes the connection, or sweep every n minutes for files older than x.
The risk: filling your disk with .sqlite files.
Solution: clear the files sooner, or use a ramdisk together with a cron job.
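As a sketch of that cleanup (directory name and timings are arbitrary placeholders), a small Node.js sweep run on an interval or from a cron job could be:
// Sketch: periodic sweep that deletes per-client .sqlite files older than maxAgeMs.
const fs = require('fs');
const path = require('path');

function sweepSqliteFiles(dir, maxAgeMs) {
    const now = Date.now();
    for (const name of fs.readdirSync(dir)) {
        if (!name.endsWith('.sqlite') && !name.endsWith('.sqlite3')) continue;
        const file = path.join(dir, name);
        const { mtimeMs } = fs.statSync(file);
        if (now - mtimeMs > maxAgeMs) fs.unlinkSync(file); // older than the cutoff
    }
}

// e.g. run every 5 minutes, removing files idle for more than an hour
setInterval(() => sweepSqliteFiles('./client-dbs', 60 * 60 * 1000), 5 * 60 * 1000);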
I've wanted to implement something like this for a long time, and this was a good way to exercise my mind.
Maybe I'll implement it like this!
Introducing easy JavaScript data access
So you want to rapidly prototype a really cool Web 2.0 JavaScript application, but you don't want to spend all your time writing the wiring code to get to the database? Traditionally, to get data all the way from the database to the front end, you need to write a class for each table in the database with all the create, read, update, and delete (CRUD) methods. Then you need to put some marshalling code atop that to provide an access layer to the front end. Then you put JavaScript libraries on top of that to access the back end. What a pain!
This article presents an alternative method in which you use a single database class to wrap multiple database tables. A single driver script connects the front end to the back end, and another wrapper class on the front end gives you access to all the tables you need.
Example/Usage
// Sample functions to update authors
function updateAuthorsTable() {
    dbw.getAll(function(data) {
        $('#authors').html('<table id="authors"><tr><td>ID</td><td>Author</td></tr></table>');
        $(data).each(function(ind, author) {
            $('#authors tr:last').after('<tr><td>' + author.id + '</td><td>' + author.name + '</td></tr>');
        });
    });
}

$(document).ready(function() {
    dbw = new DbWrapper();
    dbw.table = 'authors';
    updateAuthorsTable();

    $('#addbutton').click(function() {
        dbw.insertObject({ name: $('#authorname').val() },
            function(data) {
                updateAuthorsTable();
            });
    });
});
I think this is exactly what you're looking for. This way you won't have to build it yourself.
The most important thing is to be careful about the rights you grant to your MySQL user for this kind of operation.
For instance, you don't want them to be able to DROP a database, nor to execute a request such as:
LOAD DATA LOCAL INFILE '/etc/passwd' INTO TABLE test FIELDS TERMINATED BY '\n';
You have to limit the operations enabled for this MySQL user, and the tables they can access.
Access to the whole database:
grant select on database_name.*
to 'user_name'@'localhost' identified by 'password';
Access to a single table:
grant select on database_name.table_name
to 'user_name'@'localhost' identified by 'password';
Then... what else... This should prevent unwanted SQL injection for updating/modifying tables or accessing other tables/databases, at least as long as SELECT on a specific table/database is the only privilege you grant to this user.
But it won't prevent a user from launching a silly, badly-performing request which might eat all your CPU.
var data = Querier({
    table: "mytable, mytable9, mytable11, mytable12",
    columns: ["mytable.column1", "count(distinct mytable11.column2)",
              "SUM(mytable9.column3)"],
    where: "column8 IN (SELECT column7 FROM mytable2 " +
           "WHERE column4 IN (SELECT column5 FROM mytable3))",
    limit: "500000"
});
You have to run some checks on the data passed in if you don't want your MySQL server to be brought down.

Multi-Page Order Form with sessions

For my web dev class we have to create a login page, verify it against encrypted records (id, password) that we have to enter, and then step through an order form (while being able to step forward and backward throughout) - so sessions and all that. I have no idea where to even start, aside from coding the HTML, which I've already done. Any pushes in the right direction would be helpful; my instructor is abrasive and refuses to help most people without degrading them first.
This is kind of a longer question.
First, at the login form, you need to check against MySQL / your DB whether the username and password match.
It's basically like this:
SELECT * FROM users WHERE username = 'username' AND pass = SHA1('password')
Or use whichever hashing method you use for passwords (md5, sha1, or any other).
Then you check whether it returns a row. If it returns exactly one row, then everything is correct.
Then you put all this data into the session. I don't know how much you need, but you can put the whole SQL result row in. It doesn't matter much here since, as you said, it's dev class work.
So basically, every one of your PHP pages has to start with
session_start();
Then, once you have verified the user, you put the SQL result into the SESSION like this:
$_SESSION['userdata'] = $sql_row_array;
With this data you can read the currently logged-in user's information. So it's like:
Get username: $_SESSION['userdata']['username']
So you can use this to identify who bought/ordered the products and insert that into the database.

How to get a list of all my posts from the Facebook group

I can get a single post's data with GET /{post-id}, but how do I get a list of all my posts in a single group by group id?
You can use FQL and query the stream table, with the following query:
SELECT post_id, message FROM stream WHERE source_id='{group-id}' and actor_id=me() LIMIT 1000
Demo
Permissions required: user_groups
Here you can find the list of fields/columns you can fetch.
Using the JavaScript SDK:
Since you are using the JavaScript SDK, you can use the following code:
FB.api("/fql?q={above-query}", function (response) { ... });
Understanding the number of results by FQL:
If you query stream without a LIMIT, you will get up to the last 50 posts or the last 30 days worth of items, whichever is fewer.
If you give the LIMIT, Facebook executes your FQL and returns all posts that match your query.
Then Facebook filters out the posts that are not visible to your app. This is based on the actor's privacy settings. There is no visible_to_me field in the stream table that would allow you to pre-filter your results.
This blog explains the same.
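Putting the pieces together, a rough sketch of the complete JS SDK call could look like this (it assumes the SDK is initialized and the user has granted user_groups; the group id is a placeholder, and note that FQL only works against older Graph API versions since it was deprecated in v2.1):
// Sketch: fetch my posts in a group via FQL using the JS SDK.
var groupId = 'YOUR_GROUP_ID'; // placeholder
var fql = "SELECT post_id, message FROM stream " +
          "WHERE source_id='" + groupId + "' AND actor_id=me() LIMIT 1000";

FB.api('/fql', { q: fql }, function (response) {
    if (!response || response.error) {
        console.error(response && response.error);
        return;
    }
    // Each row contains the fields selected in the FQL query above
    response.data.forEach(function (post) {
        console.log(post.post_id, post.message);
    });
});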
