Logging from Chrome console to Python - javascript

I'm looking for a way to get console output from Google Chrome into my Python program. I have a script coded in JS that finishes in around 1 second, but my Python implementation (exactly the same logic; the only difference is that it's in Python and not JS) takes about 15 seconds to run. Therefore I'm looking for a way to get the printout in the Chrome console into my Python program.
This is the current way I'm doing it:
The Python program uses pyautogui to click and do what it needs to do inside the page to trigger the function running in JS.
The JS function completes in about 1 second and prints to the console something like:
(22) [6, 4, 4, 6, 0, 0, 2, 4, 4, 6, 4, 2, 4, 4, 6, 0, 0, 2, 4, 4, 6, 0]
I need to get that into my Python program so it can do the rest of the work.
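One way to capture that printout without hand-copying it might be Selenium 4's browser log interface. A minimal, untested sketch (the URL is a stand-in for whatever page runs the JS):

# Hedged sketch: read Chrome's console log via Selenium 4.
# Assumes chromedriver is available; the URL is a placeholder.
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

options = Options()
options.set_capability("goog:loggingPrefs", {"browser": "ALL"})
driver = webdriver.Chrome(options=options)

driver.get("https://example.com")  # the page that runs the 1-second JS

# Each entry's "message" is a string; logging JSON.stringify(result) on the
# JS side makes the array easy to recover with json.loads afterwards.
for entry in driver.get_log("browser"):
    print(entry["message"])

driver.quit()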

Related

Why does passing a large dictionary from Python to Julia flatten contained multidimensional lists?

Okay so... I'm in the process of creating a Python client for a coding game I play. This game was originally created in JavaScript and employs a JSON object of game data found at https://adventure.land/data.js. Within this data, each map has a list of doors, and each door within this list is a list of attributes. For example:
"doors":[[-965,-176,24,30,"woffice",0,1],[536,1665,64,32,"tunnel",0,2],[168,-149,32,40,"bank",0,3],[160,1370,24,32,"cave",0,4],[232,384,24,30,"arena",0,6],[-472,131,24,30,"tavern",0,8],[616,610,32,40,"mansion",0,10],[1936,-23,24,24,"level1",1,11],[169,-404,24,40,"hut",0,14],[1600,-547,60,40,"halloween",4,15],[312,-335,32,32,"mtunnel",0,16],[967,-584,32,32,"mtunnel",1,17],[1472,-434,32,32,"mtunnel",2,18]]
However, when sent from Python to Julia, it suddenly becomes:
"doors":[-965,-176,24,30,"woffice",0,1,536,1665,64,32,"tunnel",0,2,168,-149,32,40,"bank",0,3,160,1370,24,32,"cave",0,4,232,384,24,30,"arena",0,6,-472,131,24,30,"tavern",0,8,616,610,32,40,"mansion",0,10,1936,-23,24,24,"level1",1,11,169,-404,24,40,"hut",0,14,1600,-547,60,40,"halloween",4,15,312,-335,32,32,"mtunnel",0,16,967,-584,32,32,"mtunnel",1,17,1472,-434,32,32,"mtunnel",2,18]
Is there some way to prevent this from happening? If not, is there a possible workaround? My "for door in doors" code in Python won't have the same effect when directly translated to Julia, given this list flattening.
Edit: I've been asked to provide "minimum reproducible code" but I'm not too sure what the minimum is, so I'll just say...
Create a function in Julia that prints a dictionary:
function printDict(d)
    println(d)
end
Create a dictionary in Python that contains a nested list:
nestedlistdict = {"data": [[0,1,2,3],[4,5,6,7]]}
Call the printDict function from Python, passing the nestedlistdict variable as an argument:
from julia import Main
Main.include("path/to/your/julia/file")
Main.printDict(nestedlistdict)  # after include, printDict lives in Main
The result should be:
Dict{Any, Any}("data" => [[0, 1, 2, 3], [4, 5, 6, 7]])
but instead my result is:
Dict{Any, Any}("data" => [0 1 2 3; 4 5 6 7])
Edit 2: upon further testing, the issue seems to be that the nested list is converted into a Matrix object instead of a nested Vector object.
Julia uses the same interpretation of a list of lists as numpy:
>>> np.array([[1,2,3],[4,5,6]])
array([[1, 2, 3],
       [4, 5, 6]])
or in Julia:
julia> using PyCall
julia> py"[[1,2,3],[4,5,6]]"
2×3 Matrix{Int64}:
 1  2  3
 4  5  6
If you want an actual vector of vectors, you need to do it the same way you would create a vector of numpy arrays.
Python version:
>>> [np.array([1,2,3]),np.array([1,2,3])]
[array([1, 2, 3]), array([1, 2, 3])]
Julia version (this is what you are actually asking for):
julia> np = pyimport("numpy");
julia> py"[$np.array([1, 2, 3]), $np.array([1, 2, 3])]"
2-element Vector{Vector{Int32}}:
[1, 2, 3]
[1, 2, 3]
Another, perhaps simpler, option is to pack it all into a tuple:
julia> py"tuple([[1,2,3],[4,5,6]])"
([1, 2, 3], [4, 5, 6])
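Applied to the pyjulia example from the question, that tuple workaround would look something like this from the Python side (a hedged sketch; pyjulia follows the same PyCall conversion rules shown above):

from julia import Main

Main.include("path/to/your/julia/file")  # placeholder path from the question

# A tuple of lists should convert to a Julia Tuple of Vectors rather than
# being reshaped into a Matrix, since it is the list of equal-length lists
# that triggers the numpy-style Matrix conversion.
nestedlistdict = {"data": ([0, 1, 2, 3], [4, 5, 6, 7])}
Main.printDict(nestedlistdict)
# Expected: Dict{Any, Any}("data" => ([0, 1, 2, 3], [4, 5, 6, 7]))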

Convert audio file to numbers array to visualize it

I'm working on a React Native app where I want to play an audio file and visualize it. I didn't find a suitable package for it and decided to make it myself.
I made everything but the audio visualization. To visualize a file I need some kind of library that will analyze the audio for me and return an array of numbers. I will use each number in the array as a point on my future graph.
Let's imagine I have this package; ideally I would like to use it like this:
const audioPath = somePackage.analyzeAudio(audio.url);
console.log(audioPath);
// Output: [0, 0, 1, 2, 5, 10, 8, 0]
From the array [0, 0, 1, 2, 5, 10, 8, 0] I can tell that at the beginning the audio has no sound at all, then it gets louder, and at the end it's silent again. Later I can use these numbers to plot a graph.
Is there a way to do it?
I couldn't find anything useful to analyze audio on the client side, so I decided to do it on the server (Node.js) and then send the parsed data to the client.
I implemented it with the help of this package: https://github.com/audiojs/web-audio-api
This code helped me a lot: https://github.com/victordibia/beats/blob/master/beats.js
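For what it's worth, the analysis step itself is package-independent: decode the audio to raw samples, split them into a fixed number of chunks, and keep one loudness value per chunk. A rough Python sketch of that downsampling idea (assuming a 16-bit mono WAV file and numpy; amplitude_points is a made-up name):

import wave
import numpy as np

def amplitude_points(path, n_points=100):
    # Read raw 16-bit PCM samples from a mono WAV file (an assumption;
    # compressed formats would need decoding first).
    with wave.open(path, "rb") as wav:
        raw = wav.readframes(wav.getnframes())
    samples = np.abs(np.frombuffer(raw, dtype=np.int16).astype(np.float64))

    # One peak value per chunk, scaled to 0..10 like the example output.
    chunks = np.array_split(samples, n_points)
    peaks = np.array([chunk.max() if chunk.size else 0.0 for chunk in chunks])
    scale = peaks.max() or 1.0
    return np.round(10 * peaks / scale).astype(int).tolist()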

How to find a pattern with brain.js

Disclaimer: I am new to machine learning. Please forgive me if I am asking a super dumb question.
Hi, I am trying to find a pattern in a set of numbers using brain.js.
I have expanded on the example on the brain.js GitHub page.
const net = new brain.recurrent.LSTMTimeStep({
  inputSize: 3,
  hiddenLayers: [10],
  outputSize: 3
});

net.train([
  [1, 1, 1],
  [2, 2, 2],
  [3, 3, 3],
  [4, 4, 4],
  [5, 5, 5],
  [6, 6, 6],
  [7, 7, 7],
  [8, 8, 8]
]);

const output = net.run([[7, 7, 7], [8, 8, 8]]);
I was trying to get an output of [9, 9, 9], but I am mostly getting [8, 8, 8].
But if I try running const output = net.run([[5, 5, 5], [6, 6, 6]]); I easily get [7, 7, 7], and likewise the consecutive output for other numbers in the training data sequence.
Is there any way to train it so that I can get the desired output and then use it for other patterns?
You are using an LSTM time step recurrent neural network trained using "supervised learning" (https://en.wikipedia.org/wiki/Supervised_learning), which outputs values similar to those it was given prior to the .run() call.
What it seems you want is an LSTM time step recurrent neural network trained using "reinforcement learning" (https://en.wikipedia.org/wiki/Reinforcement_learning), which can output values it has seen before, but also new values it has never been given prior to the .run() call, based on exploitation and exploratory (random) findings.
Brain.js doesn't yet have a reinforcement learning API, although that will be the focus after v2 stabilizes.
Thanks for providing a clear question and reproduction steps. I've included your sample, working in the browser, so others can play with it: https://jsfiddle.net/robertleeplummerjr/4o2vcajt/1/

Exporting Chrome output to Python

I'm looking for a way to get console output from Google Chrome into my Python program. I have a script coded in JS that finishes in around 1 second, but my Python implementation (exactly the same logic; the only difference is that it's in Python and not JS) takes about 15 seconds to run. Therefore I'm looking for a way to get the printout in the Chrome console into my Python program.
This is the current way I'm doing it:
The Python program uses pyautogui to click and do what it needs to do inside the page to trigger the function running in JS.
The JS function completes in about 1 second and prints to the console something like:
(22) [6, 4, 4, 6, 0, 0, 2, 4, 4, 6, 4, 2, 4, 4, 6, 0, 0, 2, 4, 4, 6, 0]
I would like to find a way to get this output into Python, as I have another script that takes the output and performs further calculations on it.
I've thought of using Selenium, but that would introduce too much overhead (probably 3+ seconds of waiting for Chrome to open).
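Since the concern is Chrome's startup time, one alternative might be talking to an already-running Chrome over the DevTools Protocol, so nothing new has to open. A hedged sketch, assuming Chrome was started with --remote-debugging-port=9222 and the websocket-client package is installed:

import json
import urllib.request
import websocket  # pip install websocket-client

# Find the websocket endpoint of the first open page.
targets = json.load(urllib.request.urlopen("http://localhost:9222/json"))
ws_url = next(t["webSocketDebuggerUrl"] for t in targets if t["type"] == "page")

ws = websocket.create_connection(ws_url)
ws.send(json.dumps({"id": 1, "method": "Runtime.enable"}))

# Wait for the next console.log call; logging JSON.stringify(result) on the
# JS side delivers the array as a plain string "value" for json.loads.
while True:
    event = json.loads(ws.recv())
    if event.get("method") == "Runtime.consoleAPICalled":
        print([arg.get("value") for arg in event["params"]["args"]])
        break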

Create "Karaoke" Type Functionality in DraftJS

I am attempting to implement a DraftJS editor that highlights words in a transcription while its recorded audio is playing (kind of like karaoke).
I receive the data in this format:
[
  {
    transcript: "This is the first block",
    timestamps: [0, 1, 2.5, 3.2, 4.1, 5],
  },
  {
    transcript: "This is the second block. Let's sync the audio with the words",
    timestamps: [6, 7, 8.2, 9, 10, 11.3, 12, 13, 14, 15, 16, 17.2],
  },
  ...
]
I then map this received data to ContentBlocks and initialize the editor's ContentState with them using ContentState.createFromBlockArray(blocks).
It seems like the "DraftJS" way of storing the timestamp metadata would be to create an Entity for each word with its respective timestamp, and then scan through the currentContent as the audio plays and highlight entities up until the current elapsed time. But I am not sure if this is the right way to do this, as it doesn't seem performant for large transcriptions.
Note: the transcript needs to remain editable while maintaining this karaoke functionality
Any help or discussion is appreciated!
I ended up doing exactly what I described in the question: store timestamps in DraftJS entities. After a few more weeks with DraftJS it seems this is the correct way to do this.
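For anyone implementing the scan itself: independent of DraftJS, finding the word to highlight at a given playback time is just a binary search over the sorted timestamps. A small illustrative sketch in Python (word_index_at is a made-up helper, using the sample timestamps from the question):

import bisect

# Word start times flattened from all blocks, in seconds (sample data).
timestamps = [0, 1, 2.5, 3.2, 4.1, 5, 6, 7, 8.2, 9]

def word_index_at(elapsed):
    # Index of the last word whose start time is <= elapsed;
    # -1 before the first word begins.
    return bisect.bisect_right(timestamps, elapsed) - 1

print(word_index_at(3.5))  # -> 3, the word that started at 3.2 s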
