Tailing Logfile in Ruby On Rails 3.1 - javascript

I have some scripts which I have to execute from my Ruby on Rails application. To ensure that the scripts do what they should, my application must show/tail the content of the logfiles generated by those scripts.
In more detail: I have expect scripts which configure some blackbox devices over a serial connection (some sort of rollout mechanism). So I have to watch, for example, an update process or a reboot of the connected device (to verify that everything is okay). This is what I write to my logfiles.
Therefore I need to:
1. execute a process and handle the exit code
2. tail one or more logfiles (maybe JavaScript or HTML5?)
How could I do that? Examples would be really appreciated!
Thanks a lot!

The answer to #1 is pretty easy: the system call, e.g.
ret = system('ls','-l')
ret will be true if the command had a zero exit status. $? will contain a Process::Status object from which you can obtain the exit status
unless system('ls', '-l', '/a_bogus_dir')
  logger.debug("ls failed with #{$?.exitstatus}")
end
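If you also need the script's output, for example to write it into the logfile you are tailing, Ruby's standard-library Open3 is another option. This is just an illustrative sketch reusing the bogus ls call from above:
require 'open3'

# Run the command, capture stdout/stderr, and inspect the exit status
stdout, stderr, status = Open3.capture3('ls', '-l', '/a_bogus_dir')
unless status.success?
  logger.debug("ls failed with #{status.exitstatus}: #{stderr}")
end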
#2 can be done in several ways. You could create a controller action that simply grabs the contents of a specific file in the filesystem and returns them.
def get_file_contents
  # Read the requested file and hand the contents to the js.erb template
  File.open(params[:file_to_read], "r") { |f| @contents = f.read }
  respond_to do |format|
    format.js
  end
end
Then create the file get_file_contents.js.erb:
$('#display_div').html('<%= escape_javascript(@contents) %>');
Then you'd have to create a timer of some kind on your page to repeatedly call that controller action; I use jquery.timers. In the timer loop you would call
$.get('/get_file_contents?file_to_read=public/logfile');
That will hit the controller, grab the file contents, and execute get_file_contents.js.erb, which would update the div with the current contents of the file.
You'd have to add the route /get_file_contents to routes.rb.
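For Rails 3.1 that route could look roughly like this; the logs controller name is only an assumption, since the answer doesn't name one:
# config/routes.rb
get 'get_file_contents' => 'logs#get_file_contents'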

Related

How do I get my Python file to print in the terminal instead of in the server it sets up?

I am running a bit of a complicated setup where several files interact with each other:
a python file, setting up a server, which is used to connect two users online
a combination of js and html files to set up the web page that each user interacts with
So each user interacts with the js files, which in turn send a message to the python file, which reacts by sending the appropriate response to the js files on the other user’s end, etc.
To launch all this, I simply run the python file in my terminal (thus opening up the websockets) and then type the address of my html file into my browser. I know the functions in my python file are executing correctly because the interaction works in my browser; however, none of the prints in the functions show up in my terminal…
So for example, in the python file:
def message_received(client, server, message):
    print("Client(%d) said: %s" % (client['id'], message))
    response = json.loads(message)
    response_code = response['response_type']
    handle_client_response(client['id'], response_code, response)  # another function defined elsewhere

PORT = 9004
print('starting up')
server = WebsocketServer(PORT, '0.0.0.0')  # this is calling the actual server setup from another file, which I didn't write myself
server.set_fn_message_received(message_received)
server.run_forever()
The "starting up" is the only thing that will actually print, the print in message_received doesn't show up, even though I know for a fact the function is working because handle_client_response is called correctly.
My guess is this is because the function are not actually executed on the terminal, but on the server that I set up, so python is trying to print in the server instead of the terminal. But also I have no idea what I’m talking about — first time I ever do this type of complicated files interaction so I’m very confused!
Am I guessing the problem correctly? Any fix for it?
Maybe try using a library like logging to handle printing the messages. That way, you can specify where the log messages should be written.
import logging

logging.basicConfig(level=logging.DEBUG)

def message_received(client, server, message):
    logging.debug("Client(%d) said: %s" % (client['id'], message))
    response = json.loads(message)
    response_code = response['response_type']
    handle_client_response(client['id'], response_code, response)
Hopefully this will fix it.

How do you create multiple channels with Actioncable; how does one pass an in-document variable to the javascript and ruby channels and jobs?

For example, in https://www.youtube.com/watch?v=n0WUjGkDFS0 at 10:36 he mentions the ability to create multiple channels, but how would one actually accomplish this?
According to "Rails 5 ActionCable establish stream from URL parameters", a variable can be defined and passed as a parameter like:
def subscribed
  stream_from "room_channel_#{params[:roomId]}"
end
But in the javascript file, prior to passing the data along here, how does one pass in the data from the page? The following example raises an error, presumably because the cable is defined before the document is loaded.
App.room = App.cable.subscriptions.create { channel: "RoomChannel", roomId: document.getElementById("message_text").getAttribute("data-room")}
Then, if one does successfully get the data from the document into the variable here and passes it to the stream_from method, how, lastly, does the right channel get passed into the perform method to be used in the broadcast job?
def perform(message)
  ActionCable.server.broadcast 'room_channel_???', message: render_message(message) #, roomId: roomId
end
Thanks!
I learned a lot by looking at the ActionCable examples. I too was confused by the docs, which suggest parsing parameters and starting to stream immediately on subscription. While this is an option, you might prefer the approach below.
Create a special method that can be called from the client (JS) side, something like start_listening:
class RoomChannel < ApplicationCable::Channel
  # Called when the consumer has successfully
  # become a subscriber of this channel.
  def subscribed
  end

  def start_listening(room_data)
    stop_all_streams # optional, you might also keep listening...
    stream_for Room.find(room_data['room_id'])
  end

  def stop_listening
    stop_all_streams
  end
end
With this code (and a restart of the server) you can now call the following line once you have actually loaded the room:
App.roomChannel.perform("start_listening", {room_id: 20});
Now you can stream data for the room anywhere using broadcast_to, e.g. from a RoomMessage after_save callback:
RoomChannel.broadcast_to(room, room_message)
This will broadcast the message to all who're listening.
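For illustration, the broadcast above could be wired into a model callback roughly like this; the model name follows the answer, but the belongs_to association and the exact shape of the model are assumptions:
# app/models/room_message.rb -- hypothetical sketch
class RoomMessage < ApplicationRecord
  belongs_to :room

  # Push every newly saved message to everyone streaming this room
  after_save :broadcast_message

  private

  def broadcast_message
    RoomChannel.broadcast_to(room, self)
  end
end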
Separating the moment you start listening to a stream from actually opening the connection makes it easier to set up multiple data streams (there is one connection, which can have many channels, which in turn can have many streams); just don't close the old streams when starting a new one ;). Connection setup is also a bit quicker, although it typically comes at the price of having an open connection from the moment a user signs in, something you could easily work around by subscribing just before you start listening.
I came up with 2 solutions to this problem.
This one is kinda dumb: you can just parse the URL. You always want the part after the last "/", so thanks to REST this is a viable option.
The better one: you can wrap all your client subscription code in a function that is called on document load. This way you have all the data from the page available for creating a new subscription.
Hope you'll reply if you figure out a cleaner solution.

rspec testing: putting :js => true also affects the before block

I have a page for booking an appointment, and it has some javascript code that selects the earliest day and time when an appointment is available, after which the user can click on the button to schedule it.
So in order to test that, I was writing an rspec test like the following:
book_appointment_spec.rb
context "when on the profile page" do
before do
click_linkedin_button
end
it 'book an appointment', :js => true do
click_link "new-appointment"
choose "doctor1"
click_button "Submit and Schedule"
expect(page).to have_content "Congrats!"
end
end
click_linkedin_button is a method that just logs a user in via linkedin oauth. The problem is that even after setting OmniAuth.config.mock_auth[:linkedin], whenever I set :js => true on the it block, it asks me to log in via linkedin: http://imgur.com/mYUOxgD
I was wondering if anyone knows how to fix this problem.
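For context, a typical OmniAuth mock setup looks something like the following; the provider hash contents here are placeholder assumptions, not the actual config from this question:
# spec/support/omniauth.rb -- illustrative placeholder values
OmniAuth.config.test_mode = true
OmniAuth.config.mock_auth[:linkedin] = OmniAuth::AuthHash.new(
  provider: 'linkedin',
  uid: '12345',
  info: { name: 'Test User', email: 'test@example.com' }
)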
Following are other files that might be relevant to this problem.
spec_helper.rb
require 'capybara/webkit/matchers'
Capybara.javascript_driver = :webkit
Gemfile
gem 'capybara-webkit'
gem 'database_cleaner'
gem 'capybara'
As you've discovered, you can't run part of a test using one driver and part of the test using another. This would be equivalent to saying
Given I log in using Safari.
Then I should be logged in using Firefox.
So your solution is that you have to run your js login code in the test environment to log in. This is actually a good thing (you want to test your js login code). If you want to avoid actually connecting to linkedin every time you run this test, then you need to mock the connection to linkedin. Have a look at VCR (https://github.com/vcr/vcr); it will allow you to record your connection to linkedin so that in subsequent test runs you don't have to go to linkedin.
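A minimal VCR setup looks roughly like this; the cassette directory and the webmock hook are common conventions rather than anything prescribed by this answer:
# spec/support/vcr.rb -- minimal sketch
require 'vcr'

VCR.configure do |c|
  c.cassette_library_dir = 'spec/cassettes'  # where recorded HTTP interactions are stored
  c.hook_into :webmock                       # intercept HTTP requests made during specs
  c.configure_rspec_metadata!                # lets you tag examples with :vcr
end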
Setting js: true on your rspec block is a shortcut to use the javascript-enabled driver for the whole example. So this driver will be available and used during the whole execution of the example, which includes all before/after/around blocks.
To work around this, instead of using js: true, you can manually set which driver to use at the point(s) of your example where you need to.
it {
  do_some_stuff_with_default_driver

  Capybara.current_driver = :webkit
  do_some_stuff_with_webkit

  Capybara.current_driver = :selenium
  do_some_stuff_with_selenium
}
EDIT
Oops, I just read the note below, so perhaps that solution will not work. Please let me know:
Note: switching the driver creates a new session, so you may not be able to switch in the middle of a test.
I solved it by creating an actual linkedin account, putting that auth info in the .env, and just calling the fill_in method to fill out the email and password fields during the callback, and I stuck with running a js driver throughout the entire context block.
EDIT: This is of course not the best answer so I am still accepting other answers.

How to dynamically update other pages' content

New to programming here. I'm using Rails to create a web app that does reviews, but am having a little trouble figuring out where to start on this one particular part. I'd appreciate any and all help:
Let's say that on my homepage I want to have a top 10 list of restaurants. Beside each place there would be a score. If you were to click on the link to that restaurant, it would bring you to that restaurant's detail page where you can rate a number of different qualities. As users rate the place the score will update. How can I get that score and ranking to be reflected on my main homepage based on how users rate each place? Thinking this might have to be done with some Javascript (or is there a way to do this in Rails?). Thanks!
The pure answer to your question is that you need data persistence: a place to centrally store data and render it in the view for the user.
It's funny that you should ask this question in the Ruby on Rails section; this is exactly what the framework is for, and I would question your competency if you didn't consider it.
--
Database
Rails uses a central database to store your data. It uses the MVC programming pattern to give you the ability to access that data wherever you require, allowing you to manipulate it as per your requirements.
Without detailing how to make your app from scratch, I'll give you the basic principle you should use:
#config/routes.rb
root "restaurants#index"
resources :restaurants

#app/controllers/restaurants_controller.rb
class RestaurantsController < ApplicationController
  def index
    @restaurants = Restaurant.all
  end
end

#app/models/restaurant.rb
class Restaurant < ActiveRecord::Base
  has_many :reviews
end

#app/models/review.rb
class Review < ActiveRecord::Base
  belongs_to :restaurant
end

#app/views/restaurants/index.html.erb
<% @restaurants.each do |restaurant| %>
  <%= restaurant.reviews.count %>
<% end %>
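For the actual top-10 list on the homepage, the index action could be narrowed down along these lines; ordering by review count is just one possible score, so treat this as a sketch rather than a drop-in:
#app/controllers/restaurants_controller.rb
def index
  # Ten restaurants with the most reviews; order by an average rating instead if reviews store one
  @restaurants = Restaurant.joins(:reviews)
                           .group('restaurants.id')
                           .order('COUNT(reviews.id) DESC')
                           .limit(10)
end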
--
Recommended Reading
You'd be best off reading the Rails beginner guide on how to get this working properly. It's basically what Rails is for ;)
Javascript alone is not enough.
When the website is loaded in a user's browser, all the information from the database is loaded once, like the html files. Even if you update your restaurant object using ajax or a simple redirect_to :back, there will be no change on other pages or in other browsers.
To solve this, you could use something like pusher to send events each time somebody triggers an event in your app, and receive those events on your home page. If the functionality of your app isn't complicated, you can run your own push server, like faye, in your rails app. Here is the railscast about using it:
http://railscasts.com/episodes/260-messaging-with-faye?view=similar
Anyway, I prefer to use pusher every time I need to add some realtime functionality to my app.
http://pusher.com/tutorials
And about the voting process, a nice solution for that is:
"twitter/activerecord-reputation-system"
If you don't want to use ajax to update the voted restaurant page's content, you can add a vote method to your controller with redirect_to :back. This will redirect your app to the new url and, after the whole method finishes, redirect back to the refreshed page with the updated voting status.
def vote
  value = params[:type] == "up" ? 1 : -1
  @haiku = Haiku.find(params[:id])
  @haiku.add_or_update_evaluation(:votes, value, current_user)
  redirect_to :back, notice: "Thank you for voting!"
end
To refresh the home page dynamically when other users vote on restaurants, you should create a rake task which will update the information on the page based on the updated database records.

Keep session alive for long running process with jQuery

In an ASP.NET Web Forms application I have a parent form which contains another content page within an IFrame. When the user clicks on a link within the content page, a long running process (> 30 min) is started. Upon completion a popup is displayed to the user indicating the number of records processed.
I need to prevent the session timeout programmatically, without changing the default 20 min in Web.config.
I have been trying to implement the Heartbeat example posted in Keeping ASP.NET Session Open / Alive (and all over the web, so I know it should work), but it appears that it's used mostly for idle sessions.
In my case, once the content page request goes to the server side and the long running process is initiated, the HTTP Handler is not called. When the process completes, all the calls are made immediately, one after another, as if they had been "queued".
Here's my HTTP Handler:
<%@ WebHandler Language="VB" Class="KeepSessionAliveHandler" %>

Imports System
Imports System.Web

Public Class KeepSessionAliveHandler
    Implements IHttpHandler, SessionState.IRequiresSessionState

    Public Sub ProcessRequest(ByVal context As HttpContext) Implements IHttpHandler.ProcessRequest
        context.Session("heartbeat") = DateTime.Now
        context.Response.AddHeader("Content-Length", "0")
    End Sub

    Public ReadOnly Property IsReusable() As Boolean Implements IHttpHandler.IsReusable
        Get
            Return False
        End Get
    End Property
End Class
The Javascript function in the Head element of the parent page creates an interval calling the handler every 8 seconds (to be increased to 10 min in production).
function KeepSessionAlive()
{
    if (intervalKeepAliveID)
        clearTimeout(intervalKeepAliveID);

    intervalKeepAliveID = setInterval(function()
    {
        $.post("KeepSessionAliveHandler.ashx", null, function()
        {
            // Empty function
        });
    }, 8000);
}
intervalKeepAliveID is declared in a main Javascript file included in all pages of the application.
This is the code for my onclick event in the content page Head
$(document).ready(function()
{
    // Ensuring my code is executed before ASP.NET generated script
    $("#oGroup_lnkSubmit_lnkButton").attr("onclick", null).removeAttr("onclick").click(function()
    {
        // Prevent the browser from running away
        // e.preventDefault();
        window.parent.KeepSessionAlive();

        // Wave goodbye
        //window.location.href = $(this).attr('href');
        WebForm_DoPostBackWithOptions(new WebForm_PostBackOptions($(this).attr("name"), "", true, "", "", false, false));
    });
});
Somewhere I read that Javascript runs in a single thread, but given the fact that my repeating interval is outside the content page, I do not believe this should apply here...
It's not an issue with JS being single-threaded; the A in AJAX stands for Asynchronous, i.e. it doesn't block (even if you tell it to block, it really just preserves state until a response is received).
From this MSDN article...
Access to ASP.NET session state is exclusive per session, which means that if two different users make concurrent requests, access to each separate session is granted concurrently. However, if two concurrent requests are made for the same session (by using the same SessionID value), the first request gets exclusive access to the session information. The second request executes only after the first request is finished. (The second session can also get access if the exclusive lock on the information is freed because the first request exceeds the lock time-out.) If the EnableSessionState value in the @ Page directive is set to ReadOnly, a request for the read-only session information does not result in an exclusive lock on the session data. However, read-only requests for session data might still have to wait for a lock set by a read-write request for session data to clear.
See this page for a more detailed explanation of the problem and a potential workaround that gives you greater control over how the blocking is implemented.
I think you've got a couple of moving parts here that are blocking each other from behaving correctly.
Because your process is running in the server thread, this blocks other requests from being processed.
Because the keepalive depends on getting a response from the server, it doesn't complete.
I'd suggest that you look into a solution like ASP.NET SignalR, along with spawning the long-running process as a separate thread so that your server can continue to service incoming requests.
