Meteor.userId not persisting when I refresh the page - javascript

I'm using Meteor 0.6.3.1 and have improvised my own user system (not really a user system but I thought I might as well make use of the userId variable since nobody else laid claim to it.)
The problem is, the variable isn't persisting.
I have this code:
Meteor.methods({
  'initCart': function () {
    console.log(this.userId);
    if (!this.userId) {
      var id = Carts.insert({products: []});
      this.setUserId(id);
      console.log("cart id " + id + " assigned");
    }
    return this.userId;
  }
});
The point being, you should be able to switch pages but still use the same shopping cart.
I can't use Sessions since they're client-side and could lead to information leaking between users.
How should I go about doing this? Is there anything like Amplify for server-side Meteor?

From Meteor docs:
setUserId is not retroactive. It affects the current method call and
any future method calls on the connection. Any previous method calls
on this connection will still see the value of userId that was in
effect when they started.
When you refresh, you create a new connection. On that connection you log in using the cookie stored by the user system on the client side.
You can store the cart id in a cookie...
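That cookie approach can be sketched with a couple of framework-free helpers. The names setCartId/getCartId and the cartId cookie key are illustrative, not Meteor APIs; the cookie string is passed in explicitly so the logic is easy to test (in the browser it would be document.cookie):

```javascript
// Persist the cart id across page refreshes by keeping it in a cookie string.
function setCartId(cookieJar, id) {
  // Append or start a "cartId" entry; a real implementation would also
  // overwrite an existing entry and set an expiry.
  return cookieJar + (cookieJar ? "; " : "") + "cartId=" + encodeURIComponent(id);
}

function getCartId(cookieJar) {
  var match = /(?:^|;\s*)cartId=([^;]+)/.exec(cookieJar);
  return match ? decodeURIComponent(match[1]) : null;
}
```

The client would then send the stored id along with each initCart call, instead of relying on this.userId surviving the reconnect.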

This works for me:
# server/methods/app.coffee
#---------------------------------------------------
# Info found in Meteor documentation (24 Feb. 2015):
#
# > Currently when a client reconnects to the server
# > (such as after temporarily losing its Internet connection),
# > it will get a new connection each time. The onConnection callbacks will be
# > called again, and the new connection will have a new connection id.
# > In the future, when client reconnection is fully implemented,
# > reconnecting from the client will reconnect to the same connection on the server:
# > the onConnection callback won't be called for that connection again,
# > and the connection will still have the same connection id.
#
# To avoid removing data from persistent collections (e.g. cartitems) associated
# with the client sessionId (conn.id), we implement the following logic:
#
# 1. Add the client IP (conn.clientAddress) to the cartitems document
#    (and also keep the conn.id).
# 2. When a new connection is created, find the cartitems associated with this
#    conn.clientAddress and update (multi: true) the conn.id on all of them.
# 3. Only remove the cartitems associated with the conn.id after 10 seconds.
#    If the customer reconnects within that time, the conn.id will already have
#    been updated in step 2, so e.g. a simple page refresh does not wipe the cart.
# 4. After 10 seconds (e.g. the user closed the window), remove the cartitems
#    associated with the conn.id that was closed.
Meteor.onConnection (conn) ->
  CartItems.update({clientAddress: conn.clientAddress}, {$set: {sessionId: conn.id}}, {multi: true})
  conn.onClose ->
    Meteor.setTimeout ->
      CartItems.remove {sessionId: conn.id}
    , 10000

Meteor.methods
  getSessionData: ->
    conn = this.connection
    {sessionId: conn.id, clientAddress: conn.clientAddress}

Related

multiple browser instances with websocket capabilities in node?

Let's say I am building a social app. I want to log into multiple accounts (one per browser instance) without a user interface (all via node), calling the respective endpoints to log in and start chatting.
The important part is to test when a user closes the tab, logs out, or leaves the group, and the websocket connection therefore closes.
If I understand you correctly, you would like to trigger a server-side event whenever a client connects or disconnects, without any HTML, CSS, or other user interface.
You can do it like this in node:
For connection you use:
Server.on("connection", function(ws) { /* some stuff... */ })
The function that is called on connection receives the websocket that connected as its parameter. I just use an anonymous function here; you can also pass a named function, which will then receive the websocket as its parameter.
For disconnection you register a handler inside the Server.on callback to detect when the client disconnected. Like this:
Server.on("connection", function(ws) {
  ws.on("close", function() {
    // some stuff...
  })
})
Again, you can replace the anonymous function with a named one.
Server is in my case defined like this:
const WsServer = require("ws").Server
const Server = new WsServer({ port: someport })
But it can vary.
All you need to do apart from that is connect the client.
I do it like this but it can vary as well.
const ws = new WebSocket("ws://localhost:someport");
I hope this is helpful.

How to call a Javascript function from Python

I have Python code (server side) that doesn't interact with the client side. However, I need to display some items once the server code has finished. The only idea I came up with is calling a JavaScript function, which displays an item, from Python. Could you advise me of packages, or another idea, to implement this?
Some details (I am not sure whether this is necessary, but it might be helpful):
async def start_delete_delay(app, delay):
    """
    The function which enforces a delay for each front token.

    Key arguments:
    app -- our application.
    delay -- a delay in seconds
    """
    async with app['db'].acquire() as conn:
        # First of all we need to check whether the database is empty
        query = text("SELECT True FROM tokens LIMIT(1)")
        if await conn.fetch(query):
            # If the database is not empty then we process a waiting delay.
            # First, fetch the id & related token from the first position
            # (it is a queue) in the database.
            query = select([db.tokens.c.id, db.tokens.c.token]).order_by(asc(db.tokens.c.id)).limit(1)
            query_result = await conn.fetchrow(query)
            # Retrieve the id and token
            id_before_sleep, token = query_result['id'], query_result['token']
            # Set a delay
            try:
                await asyncio.sleep(delay)
            # Some information related to the cancellation error:
            # https://docs.python.org/3/library/asyncio-task.html#asyncio.Task.cancel
            except asyncio.CancelledError:
                pass
            # Check whether the token in first place is the same as before
            finally:
                # It is possible that all members picked their tokens within 60 seconds.
                if await conn.fetch(text("SELECT True FROM tokens LIMIT(1)")):
                    query_result = await conn.fetchrow(query)
                    id_after_sleep = query_result['id']
                    # If they are the same, delete that token and start the delay again.
                    if id_before_sleep == id_after_sleep:
                        query = delete(db.tokens).where(db.tokens.c.id == id_before_sleep)
                        # Prepare the token for reuse.
                        app['new_token'].prepare_used_token(token)
                        # Delete the token
                        await conn.fetchrow(query)
                        # I'd like to call a JS function (which I already have) here
    # Start a delay for the adjacent token, over and over again
    task = make_task(start_delete_delay, app, delay)
    asyncio.gather(task)
I found two solutions, so if someone is faced with such a problem, try one of these:
First solution
The key is WebSockets. I used aiohttp and asyncio.
In the JavaScript file I added a listening socket:
var socket = new WebSocket('/link-to-websocket')
On the server side I added a websocket_handler; in my case it sends a message after deleting a token from the database:
async def websocket_handler(request):
    ws = web.WebSocketResponse()
    await ws.prepare(request)
    async for msg in ws:
        if msg.type == aiohttp.WSMsgType.TEXT:
            if app['updt_flag']:
                await ws.send_str("signal")
            else:
                await ws.close()
    return ws
And adding it to routes
app.add_routes([web.get('/link-to-websocket', websocket_handler)])
1) How JavaScript works: Deep dive into WebSockets and HTTP/2 with SSE + how to pick the right path
2) Python aiohttp websockets
However, this method isn't the best fit: we don't use most of the websocket's functionality, so let's move on to another method: Server-Sent Events (SSE). It suits my problem better because the server simply pushes messages to the client; the full bidirectional channel of a websocket isn't needed here:
Second solution
As I said above, I will use SSE; it requires the aiohttp_sse package:
pip install aiohttp_sse
import asyncio
from aiohttp_sse import sse_response

async def SSE_request(request):
    loop = request.app.loop
    async with sse_response(request) as resp:
        while True:
            if request.app['updt_flag']:
                await resp.send("signal")
                request.app['updt_flag'] = False
            await asyncio.sleep(1, loop=loop)
    return resp
Adding route
web.get('/update', SSE_request)
Adding the SSE listener to the JS:
const evtSource = new EventSource("/update");
evtSource.onmessage = function(e) {
  display_queue_remove();
};
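For reference, the stream EventSource consumes is just plain text over a long-lived HTTP response, with each message framed by "data:" lines and terminated by a blank line. A minimal formatter (illustrative, not part of aiohttp_sse) shows the wire format:

```javascript
// Frame a message for the text/event-stream wire format:
// each payload line is prefixed with "data: ", and a blank
// line terminates the event.
function sseFrame(data) {
  return data
    .split("\n")
    .map(function (line) { return "data: " + line; })
    .join("\n") + "\n\n";
}
```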
That's all :)

How to run Google Cloud SQL only when I need it?

Google Cloud SQL advertises that it's only $0.0150 per hour for the smallest machine type, and I'm being charged for every hour, not just hours that I'm connected. Is this because I'm using a pool? How do I setup my backend so that it queries the cloud db only when needed so I don't get charged for every hour of the day?
const mysql = require('mysql');

const pool = mysql.createPool({
  host     : process.env.SQL_IP,
  user     : 'root',
  password : process.env.SQL_PASS,
  database : 'mydb',
  ssl      : {
    [redacted]
  }
});

function query(queryStatement, cB) {
  pool.getConnection(function(err, connection) {
    if (err) return cB(err);
    // Use the connection
    connection.query(queryStatement, function (error, results, fields) {
      // Done with the connection: release it back to the pool
      // (destroy() would close it and defeat the point of pooling)
      connection.release();
      // Callback
      cB(error, results, fields);
    });
  });
}
This is not so much about the pool as it is about the nature of Cloud SQL. Unlike App Engine, Cloud SQL instances are always up. I learned this the hard way one Saturday morning when I'd been away from the project for a week. :)
There's no way to spin them down when they're not being used, unless you explicitly go stop the service.
There's no way to schedule a service stop, at least within the GCP SDK. You could always write a cron job, or something like that, that runs a little gcloud sql instances patch [INSTANCE_NAME] --activation-policy NEVER command at, for example, 6pm local time, M-F. I was too lazy to do that, so I just set a calendar reminder to shut down my instance at the end of my workday.
Here's the MySQL Instance start/stop/restart page for the current SDK's docs:
https://cloud.google.com/sql/docs/mysql/start-stop-restart-instance
On an additional note, there is an ongoing Feature Request on the GCP platform to start/stop Cloud SQL (2nd Gen) according to traffic as well. You can visit the link and add your suggestions/comments there.
I took the idea from @ingernet and created a cloud function which starts/stops the CloudSQL instance when needed. It can be triggered via a scheduled job, so you can define when the instance goes up or down.
The details are in this GitHub Gist (inspiration taken from here). Disclaimer: I'm not a Python developer, so there might be issues in the code, but in the end it works.
Basically you need to follow these steps:
1. Create a pub/sub topic which will be used to trigger the cloud function.
2. Create the cloud function and copy in the code below.
   - Make sure to set the correct project ID in line 8.
   - Set the trigger to Pub/Sub and choose the topic created in step 1.
3. Create a cloud scheduler job to trigger the cloud function on a regular basis.
   - Choose the frequency at which the cloud function should be triggered.
   - Set the target to Pub/Sub and define the topic created in step 1.
   - The payload should be "start [CloudSQL instance name]" or "stop [CloudSQL instance name]" to start or stop the specified instance (e.g. "start my_cloudsql_instance" will start the CloudSQL instance named my_cloudsql_instance).
Main.py:
from googleapiclient import discovery
from oauth2client.client import GoogleCredentials
import base64
from pprint import pprint

credentials = GoogleCredentials.get_application_default()
service = discovery.build('sqladmin', 'v1beta4', credentials=credentials, cache_discovery=False)
project = 'INSERT PROJECT_ID HERE'

def start_stop(event, context):
    print(event)
    pubsub_message = base64.b64decode(event['data']).decode('utf-8')
    print(pubsub_message)
    command, instance_name = pubsub_message.split(' ', 1)
    if command == 'start':
        start(instance_name)
    elif command == 'stop':
        stop(instance_name)
    else:
        print("unknown command " + command)

def start(instance_name):
    print("starting " + instance_name)
    patch(instance_name, "ALWAYS")

def stop(instance_name):
    print("stopping " + instance_name)
    patch(instance_name, "NEVER")

def patch(instance, activation_policy):
    request = service.instances().get(project=project, instance=instance)
    response = request.execute()
    dbinstancebody = {
        "settings": {
            "settingsVersion": response["settings"]["settingsVersion"],
            "activationPolicy": activation_policy
        }
    }
    request = service.instances().patch(
        project=project,
        instance=instance,
        body=dbinstancebody)
    response = request.execute()
    pprint(response)
requirements.txt:
google-api-python-client==1.10.0
google-auth-httplib2==0.0.4
google-auth==1.19.2
oauth2client==4.1.3

Implementing notification/alert popup on job completion in Ruby on Rails

I am implementing background processing jobs in Rails using the 'Sidekiq' gem; they are run when a user clicks a button. Since the jobs run asynchronously, Rails replies instantly.
I want to add functionality, or a callback, to trigger a JavaScript popup showing that the job has finished.
Controller Snippet:
def exec_job
  JobWorker.perform_async(@job_id, @user_id)
  respond_to do |wants|
    wants.html { }
    wants.js { render 'summary.js.haml' }
  end
end
Edit 1:
I am storing the 'user_id' to keep track of the user who triggered the job, so that I can relate the popup to this user.
Edit 2:
The 'perform' method of Sidekiq does some database manipulation (mostly updates) and log creation, which takes time.
Currently, the user only finds out the status when he refreshes the page later.
Edit 3(Solution Attempt 1):
I tried implementing a 'Faye' push notification by subscribing after a successful user login (with the channel name based on user_id).
On the server side, when the job completes, I create another client to publish a message to the same channel (following the Faye reference documents).
It works fine on my desktop, i.e., I can see an alert popup stating that the job has completed. But when I test from another machine on my local network, the alert is not shown.
Client Side Script:
(function() {
  var faye = new Faye.Client('http://Server_IP:9292/faye');
  var public_subscription = faye.subscribe("/users/#{current_user.id}", function(data) {
    alert(data);
  });
})();
Server Side Code:
EM.run {
  client = Faye::Client.new('http://localhost:9292/faye')
  publication = client.publish("/users/#{user.id}", 'Execution completed!')
  publication.callback do
    logger.info("Message sent to channel '/users/#{user.id}'")
  end
  publication.errback do |error|
    logger.info('There was a problem: ' + error.message)
  end
}
Rails 4 introduced the ActionController::Live module, which allows pushing SSEs to the browser. If you want to trigger this from a database update, you will need to look into configuring PostgreSQL's LISTEN and NOTIFY.
class MyController < ActionController::Base
  include ActionController::Live

  def stream
    response.headers['Content-Type'] = 'text/event-stream'
    100.times {
      response.stream.write "hello world\n"
      sleep 1
    }
  ensure
    response.stream.close
  end
end
Here is a good article on it: http://ngauthier.com/2013/02/rails-4-sse-notify-listen.html
Thanks fmendez.
I looked at the suggestions given by other users and finally implemented a 'Faye'-based push notification by:
Subscribing the user, on successful login, to a channel created from 'user_id'
Replying to this channel from the server side after job completion (by fetching the user_id)
For better understanding (so that it may be helpful to others), check the edit to the question.

Meteor client disconnected event on server

Simple question, maybe simple answer: how do I know on the server that a certain client has disconnected? Basic use case: the serve would need to know if a player has dropped the connection.
In the publish function, you can watch the socket close event as follows:
this._session.socket.on "close", -> # do your thing
Meteor.publish("yourPublishFunction", function() {
  var id = this._session.userId;
  this._session.socket.on("close", Meteor.bindEnvironment(function() {
    console.log(id); // called once the user disconnects
  }, function(e) { console.log(e); }));
  return YourCollection.find({});
});
I've created a pretty comprehensive package to keep track of all logged-in sessions from every user, as well as their IP addresses and activity:
https://github.com/mizzao/meteor-user-status
To watch for disconnects, you can just do the following, which catches both logouts and browser closes:
UserStatus.on "sessionLogout", (userId, sessionId) ->
  console.log(userId + " with session " + sessionId + " logged out")
You can also check out the code and do something similar for yourself.
Maybe (in the server code)
Meteor.default_server.sessions.length
or
Meteor.default_server.stream_server.open_sockets.length
Alternatively, you could implement a heartbeat: have the browser call a server endpoint via AJAX at a small interval (setInterval), passing the session value in a header. If the server stops receiving requests from a user, it can assume that user dropped the connection.
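That heartbeat idea can be sketched framework-free. The tracker below just records a last-seen timestamp per session and reports sessions that have gone quiet; the names and the timeout value are arbitrary, and timestamps are passed in explicitly for testability (in production you would use Date.now()):

```javascript
// Record pings per session; a session counts as dropped once no ping
// has arrived within timeoutMs.
function createHeartbeatTracker(timeoutMs) {
  const lastSeen = new Map();
  return {
    ping: function (sessionId, now) {
      lastSeen.set(sessionId, now);
    },
    droppedSessions: function (now) {
      const dropped = [];
      lastSeen.forEach(function (t, id) {
        if (now - t > timeoutMs) dropped.push(id);
      });
      return dropped;
    }
  };
}
```

The browser would hit an endpoint on an interval, the endpoint would call ping(sessionId, Date.now()), and a periodic sweep of droppedSessions would mark users as disconnected.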
