No matter what I try, I cannot run one test script on more than one device.
I have one test APK and a test script, pulled from a site as an example, that finds a textbox in the app, types "Hello World!" into it, and finishes. For now I am trying to run the script on two devices. I've created four batch scripts: two that start two Appium server instances with different parameters, and two that run two instances of the test script, also with different parameters (including capabilities).
The batch files look like this:
run-servers.bat
start "Appium Server 1" appium -p 5000 -bp 5100 --session-override
start "Appium Server 2" appium -p 5001 -bp 5101 --session-override
(I don't know exactly what --session-override does, since none of the descriptions I found online go into detail, but the results are the same with or without it.)
run-testscript.bat
start "Test 1" node testing.js 5000 9 Emulator-9 emulator-5554
start "Test 2" node testing.js 5001 7 Emulator-7 emulator-5556
(The extra parameters after the script file are:
<Port> <Android-Version> <Device Name> <Unique ID>)
And the script:
const driver = require("webdriverio");
const assert = require("assert"); // needed for assert.equal below
const args = process.argv;

const caps = {
    port: parseInt(args[2]),
    capabilities: {
        platformName: "Android",
        platformVersion: args[3],
        deviceName: args[4],
        app: "D:/Node/Appium/Test/apk/ApiDemos-debug.apk",
        appPackage: "io.appium.android.apis",
        appActivity: ".view.TextFields",
        automationName: "UiAutomator2",
        uniqueID: args[5]
    }
};

async function test(caps) {
    const client = await driver.remote(caps);
    const field = await client.$("android.widget.EditText");
    await field.setValue("Hello World!");
    const value = await field.getText();
    assert.equal(value, "Hello World!");
    await client.deleteSession();
}

test(caps);
When I run the two test instances, the app starts on both devices, but only one device receives the "Hello World!" input. The server attached to the device without input also logs an ECONNRESET ("A server-side error occurred...") error.
You need to add systemPort to your Appium capabilities, with a different systemPort value for every device (e.g., 8201, 8202, etc.).
Please read the Appium Desired Capabilities documentation.
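For illustration, a hedged sketch of how that could look in the question's script: the extra command-line argument carrying the port is my addition, and note that the standard capability for targeting a device by its adb id is udid, not uniqueID:

// Hypothetical invocation: node testing.js 5000 9 Emulator-9 emulator-5554 8201
const caps = {
    port: parseInt(args[2]),
    capabilities: {
        platformName: "Android",
        platformVersion: args[3],
        deviceName: args[4],
        automationName: "UiAutomator2",
        udid: args[5],                  // adb id: emulator-5554 / emulator-5556
        systemPort: parseInt(args[6])   // must be unique per device, e.g. 8201 / 8202
    }
};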
I have a Cloud Function that starts a Compute Engine instance. However, when multiple functions are triggered, the incoming functions/commands interrupt the actions already running on the Compute Engine instance. The instance runs a PyTorch implementation... Is there a way to send the incoming functions to a queue, so that the action currently running on Compute Engine completes before the next one is picked up and the machine is shut down? Any conceptual guidance would be greatly appreciated.
EDIT
My function is triggered on changes to a Storage bucket (uploads). In the function, I start a GCE instance and customize its startup behavior with a startup script as follows (some commands and directories are simplified for brevity):
import os
from googleapiclient.discovery import build

def start(event, context):
    file = event
    print(file["id"])
    string = file["id"]
    newstring = string.split('/')
    userId = newstring[1]
    paymentId = newstring[2]
    name = newstring[3]
    print(name)
    if name == "uploadcomplete.txt":
        startup_script = """#! /bin/bash
cd ~ && pwd 1>>/var/log/log.out 2>&1
PATH=$PATH://usr/local/cuda 1>>/var/log/log.out 2>&1
cd program_directory 1>>/var/log/log.out 2>&1
source /opt/anaconda3/etc/profile.d/conda.sh 1>/var/log/log.out 2>&1
conda activate env
cd keras-retinanet/ 1>>/var/log/log.out 2>&1
export PYTHONPATH=`pwd` 1>>/var/log/log.out 2>&1
cd tracker 1>>/var/log/log.out 2>&1
python program_name --gcs_input_path gs://input/{userId}/{paymentId} --gcs_output_path gs://output/{userId}/{paymentId} 1>>/var/log/log.out 2>&1
sudo python3 gcs_to_mongo.py {userId} {paymentId} 1>>/var/log/log.out 2>&1
sudo shutdown -P now
""".format(userId=userId, paymentId=paymentId)
        service = build('compute', 'v1', cache_discovery=False)
        print('VM Instance starting')
        project = 'XXXX'
        zone = 'us-east1-c'
        instance = 'YYYY'
        metadata = service.instances().get(project=project, zone=zone, instance=instance)
        metares = metadata.execute()
        print(metares)
        fingerprint = metares["metadata"]["fingerprint"]
        print(fingerprint)
        bodydata = {"fingerprint": fingerprint,
                    "items": [{"key": "startup-script", "value": startup_script}]}
        print(bodydata)
        meta = service.instances().setMetadata(project=project, zone=zone, instance=instance,
                                               body=bodydata)
        res = meta.execute()
        instanceget = service.instances().get(project=project, zone=zone, instance=instance).execute()
        request = service.instances().start(project=project, zone=zone, instance=instance)
        response = request.execute()
        print('VM Instance started')
        print(instanceget)
        print("New Metadata:", instanceget['metadata'])
The problem occurs when multiple batches are uploaded to Cloud Storage. Each new function will restart the GCE instance with a new startup script and begin work on the new data, leaving the previous data unfinished.
Cloud Tasks indeed meets your requirement. Cloud Tasks lets you separate out pieces of work that can be performed independently, outside of your main application flow, and send them off to be processed asynchronously using handlers that you create. Let me know if you have more questions about adapting Cloud Tasks to your current system structure.
Since your GCE instances are started by a Cloud Function, Cloud Tasks can call your Cloud Function through its HTTP endpoint with a public IP address, so your Cloud Tasks are HTTP targets. You can follow the Google Cloud Tasks guide Creating HTTP Target tasks for the explanation and sample code.
GCP also provides the tutorial Using Cloud Tasks to trigger Cloud Functions, whose system design matches your requirement exactly.
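As a rough illustration, a minimal Node.js sketch of enqueuing an HTTP-target task with the @google-cloud/tasks client (the project, location, queue name, and function URL are placeholders; the Python client offers the same flow):

const { CloudTasksClient } = require('@google-cloud/tasks');

async function enqueueJob(payload) {
    const client = new CloudTasksClient();
    // Placeholder project/location/queue; create the queue beforehand.
    const parent = client.queuePath('XXXX', 'us-east1', 'gce-jobs');
    const task = {
        httpRequest: {
            httpMethod: 'POST',
            url: 'https://REGION-XXXX.cloudfunctions.net/start', // hypothetical HTTP-triggered function
            headers: { 'Content-Type': 'application/json' },
            // The HTTP task body must be base64-encoded bytes.
            body: Buffer.from(JSON.stringify(payload)).toString('base64'),
        },
    };
    const [response] = await client.createTask({ parent, task });
    console.log('Created task ' + response.name);
}

To get the serial behavior you describe, you can also cap the queue's concurrency (gcloud tasks queues update gce-jobs --max-concurrent-dispatches=1) so one task finishes before the next is dispatched.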
I am sending logs directly to Elasticsearch from a Node.js app using the winston and winston-elasticsearch packages. Elasticsearch 7.5.1, Logstash & Kibana 7.5.1 were deployed on a remote server using Docker Compose.
Problem 1: After running the node.js file that sends 2 log messages to Elasticsearch, the program does not automatically exit to return to the terminal. Using Node.js v12.6.0 on Mac OS X Mojave 10.14.6.
Problem 2: After these 2 log messages were sent to Elasticsearch, they can be viewed using a web browser at http://<example.com>:9200/logs-2020.02.01/_search.
{"took":5,"timed_out":false,"_shards":{"total":1,"successful":1,"skipped":0,"failed":0},"hits":{"total":{"value":2,"relation":"eq"},"max_score":1.0,"hits":[{"_index":"logs-2020.02.01","_type":"_doc","_id":"85GgA3ABiaPPk4as1pEc","_score":1.0,"_source":{"#timestamp":"2020-02-02T02:00:35.789Z","message":"a debug message","severity":"debug","fields":{}}},{"_index":"logs-2020.02.01","_type":"_doc","_id":"9JGgA3ABiaPPk4as1pEc","_score":1.0,"_source":{"#timestamp":"2020-02-02T02:00:35.791Z","message":"an info log","severity":"info","fields":{}}}]}}
However, these logs do not show up in Kibana, for example in the Logs section at https://<example.com>/app/infra#/logs/stream?_g=().
Any idea how to get the logs to also show up on Kibana? Also, why is the Node.js app not exiting after sending the log messages?
Thank you!
Node.js App
const winston = require('winston');
const ElasticsearchWinston = require('winston-elasticsearch');

const options = {
    console: {
        level: 'debug',
        handleExceptions: true,
        json: false,
        colorize: true
    },
    elasticsearch: {
        level: 'debug',
        clientOpts: {
            node: 'http://user:pass@example.com:9200',
            log: 'debug',
            maxRetries: 2,
            requestTimeout: 10000,
            sniffOnStart: false,
        }
    }
};

var logger = winston.createLogger({
    exitOnError: false,
    transports: [
        new winston.transports.Console(options.console),
        new ElasticsearchWinston(options.elasticsearch)
    ]
});

logger.debug('a debug message');
logger.info('an info log');
I'm not a Node.js expert, so I will focus only on the Kibana issue. The Logs app is not meant for "custom" logs/indices like yours.
As stated in the documentation (https://www.elastic.co/guide/en/kibana/current/xpack-logs.html):
The Logs app in Kibana enables you to explore logs for common servers, containers, and services.
The Logs app is for monitoring your infrastructure and ELK services, e.g. through certain Beats modules (such as the Elasticsearch, Kibana and Logstash modules of Filebeat).
Also from the docs (https://www.elastic.co/guide/en/kibana/current/xpack-logs-configuring.html):
The default source configuration for logs is specified in the Logs app settings in the Kibana configuration file. The default configuration uses the filebeat-* index pattern to query the data.
This explains why you don't see any data in the Logs app: your indices use the 'logs-*' index pattern.
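Side note: if you did want the Logs app to read these indices, its source configuration can also be pointed at a different index pattern. A hedged kibana.yml sketch for 7.x, where the setting name is my assumption from the 7.x docs and should be verified for your exact version:

# kibana.yml -- assumed Logs app source setting in 7.x, verify for your version
xpack.infra.sources.default.logAlias: "logs-*"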
Long story short:
To view the documents in your logs-* indices, open Discover (the first icon in the left sidebar in Kibana) and select the index pattern you have already set up. This is the appropriate way to search your application data in Kibana.
I hope I could help you.
I'm working on a small CLI tool that can automatically deploy Google Home actions based on the projects that are set up in a directory.
Basically, my script checks the directories and then asks which project to deploy. The actual command that should run comes from Google's CLI, gactions.
To run it with arguments, I set up a spawned process in my Node script:
const { spawn } = require('child_process')

const child = spawn('./gactions', [
    'update',
    '--action-package',
    '<PATH-TO-PACKAGE>',
    '--project',
    '<PROJECT-NAME>'
])

child.stdout.on('data', data => {
    console.log(data)
})
However, the first time a project is deployed, the gactions CLI prompts for an authorization code. Running the code above, I can actually see the prompt, but the script won't proceed when I enter the code.
I guess there must be some way to capture that input in the child process? Or isn't this possible at all?
Simply pipe all standard input from the parent process to the child and all output from the child to the parent.
The code below is a full wrapper around any shell command, with input/output/error redirection:
const { spawn } = require('child_process');
var child = spawn(command, args);
child.stdout.pipe(process.stdout);
child.stderr.pipe(process.stderr);
process.stdin.pipe(child.stdin);
child.on('exit', () => process.exit())
Note that if you pipe stdout, you no longer need to handle the data event yourself.
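Applied to the question's gactions call, the wrapper could look like this (the package path and project name are placeholders, as in the question):

const { spawn } = require('child_process');

const child = spawn('./gactions', [
    'update',
    '--action-package', '<PATH-TO-PACKAGE>',
    '--project', '<PROJECT-NAME>'
]);

child.stdout.pipe(process.stdout);
child.stderr.pipe(process.stderr);
process.stdin.pipe(child.stdin); // lets you type the authorization code at the prompt
child.on('exit', code => process.exit(code));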
require( "child_process" ).spawnSync( "sh", [ "-c", "npm adduser" ], { stdio: "inherit", stdin: "inherit" } );
this will execute the command given as we normally do in terminal.
I want to know how I can use Selenium WebDriver to verify that a file was downloaded after I click the download button.
Your question doesn't say whether you want to confirm the download locally or remotely (e.g. on BrowserStack). If it is remote, then my answer is "NO": you can see the file being downloaded, but you cannot access the download folder, so you won't be able to assert that the file was downloaded.
If you want to achieve this locally (in Chrome), then the answer is "YES". You can do it something like this:
In wdio.conf.js (to know where the file is being downloaded):
var path = require('path');
const pathToDownload = path.resolve('chromeDownloads');
// 'chromeDownloads' above is the name of a folder in the root directory

exports.config = {
    capabilities: [{
        maxInstances: 1,
        browserName: 'chrome',
        os: 'Windows',
        chromeOptions: {
            args: [
                'user-data-dir=./chrome/user-data',
            ],
            prefs: {
                "download.default_directory": pathToDownload,
            }
        }
    }],
    // ...the rest of your config
};
And your spec file (to check whether the file is downloaded):
const fsExtra = require('fs-extra');
const pathToChromeDownloads = './chromeDownloads';

describe('User can download and verify a file', () => {
    before(() => {
        // Clean up the chromeDownloads folder and create a fresh one
        fsExtra.removeSync(pathToChromeDownloads);
        fsExtra.mkdirsSync(pathToChromeDownloads);
    });

    it('Download the file', () => {
        // Code to download
    });

    it('Verify the file is downloaded', () => {
        // Code to verify
        // Get the name of the file and assert it against the expected name
    });
});
More about fs-extra: https://www.npmjs.com/package/fs-extra
Hope this helps.
TL;DR: Unless your web app has some kind of visual/GUI trigger once the download finishes (some text, an image/icon font, a push notification, etc.), the answer is a resounding NO.
WebDriver can't go outside the scope of your browser, but your underlying framework can, especially if you're using NodeJS. :)
Off the top of my head, I can think of a few ways I've done this in the past. Choose as applicable:
1. Verify if the file has been downloaded using Node's File System (aka fs)
Since you're running WebdriverIO under a NodeJS environment, you can make use of its powerful tool suite. I would use fs.existsSync (or the deprecated fs.exists) to verify that the file is in the expected folder.
If you want to be diligent, also use fs.statSync in conjunction with fs.existsSync and poll the file until it has the expected size (e.g. > 2560 bytes); a sketch of this polling idea follows at the end of this option.
There are multiple examples online that can help you put together such a script. Use the fs documentation, but other resources as well. Lastly, you can add said script inside your it/describe statement (I remember you were using Mocha).
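For illustration, a minimal sketch of that polling idea; the file name, folder, and minimum size below are assumptions:

const fs = require('fs');
const path = require('path');

// Poll until filePath exists and exceeds minBytes, or time out.
function waitForDownload(filePath, minBytes, timeoutMs = 30000, intervalMs = 500) {
    return new Promise((resolve, reject) => {
        const started = Date.now();
        (function poll() {
            if (fs.existsSync(filePath) && fs.statSync(filePath).size > minBytes) {
                return resolve(filePath);
            }
            if (Date.now() - started > timeoutMs) {
                return reject(new Error('Timed out waiting for ' + filePath));
            }
            setTimeout(poll, intervalMs);
        })();
    });
}

// Hypothetical usage inside a Mocha it() block:
// await waitForDownload(path.join('chromeDownloads', 'report.pdf'), 2560);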
2. Use child_process's exec command to launch third-party scripts
Though this method requires more setup work, I find it more useful in the long run.
!!! Caution: Apart from launching the script, you also need to write the verification script itself in a third-party framework:
Using an AutoIT script;
Using a Sikuli script;
Using a TestComplete script (not linking it, I don't like it that much), or [insert GUI verification tool here];
Note: All the above frameworks can generate an .exe file that you can trigger from your WebdriverIO test-cases in order to check if your file has been downloaded, or not.
Steps to take:
create one of the stand-alone scripts like mentioned above;
place the script's .exe file inside your project in a known folder;
use child_process.exec to launch the script and assert its result after it finishes its execution;
Example:
const exec = require('child_process').exec;

// Make sure you also remove the .exe from scriptName
var yourScript = pathToScript + scriptName;

var child = exec(yourScript);
child.on('close', function (code, signal) {
    if (code !== 0) {
        // Non-zero exit code: the verification script failed.
        // (callback.fail and the 'online' lookup come from my own test setup;
        // replace them with your framework's failure reporting.)
        callback.fail(online.online[module][code]);
    } else {
        callback();
    }
});
Finally: I'm sure there are other ways to do it, but your main takeaway from such a vague question should be: YES, you can verify that the file has been downloaded if you absolutely must, especially if this test case is CRITICAL to your regression run.
I'm trying to get webdriver.io and Jasmine working.
Following their example, my script is at test/specs/first/test2.js (in accordance with the configuration) and contains:
var webdriverio = require('webdriverio');

describe('my webdriverio tests', function() {
    var client = {};
    jasmine.DEFAULT_TIMEOUT_INTERVAL = 9999999;

    beforeEach(function() {
        client = webdriverio.remote({ desiredCapabilities: {browserName: 'firefox'} });
        client.init();
    });

    it('test it', function(done) {
        client
            .url("http://localhost:3000/")
            .waitForVisible("h2.btn.btn-primary")
            .click("h2.btn.btn-primary")
            .waitForVisible("h2.btn.btn-primary")
            .call(done);
    });

    afterEach(function(done) {
        client.end(done);
    });
});
I'm using wdio as the test runner and set it up using the interactive setup. That config is automatically generated and all pretty straightforward, so I don't see a need to post it.
In another terminal window, I am running selenium-server-standalone-2.47.1.jar with Java 7. I do have Firefox installed on my computer (it starts with a blank page when the test is run), and my computer is running OS X 10.10.5.
This is what happens when I start the test runner:
$ wdio wdio.conf.js
=======================================================================================
Selenium 2.0/webdriver protocol bindings implementation with helper commands in nodejs.
For a complete list of commands, visit http://webdriver.io/docs.html.
=======================================================================================
[18:17:22]: SET SESSION ID 46731149-79aa-412e-b9b5-3d32e75dbc8d
[18:17:22]: RESULT {"platform":"MAC","javascriptEnabled":true,"acceptSslCerts":true,"browserName":"firefox","rotatable":false,"locationContextEnabled":true,"webdriver.remote.sessionid":"46731149-79aa-412e-b9b5-3d32e75dbc8d","version":"40.0.3","databaseEnabled":true,"cssSelectorsEnabled":true,"handlesAlerts":true,"webStorageEnabled":true,"nativeEvents":false,"applicationCacheEnabled":true,"takesScreenshot":true}
NoSessionIdError: A session id is required for this command but wasn't found in the response payload
at waitForVisible("h2.btn.btn-primary") - test2.js:21:14
/usr/local/lib/node_modules/webdriverio/node_modules/q/q.js:141
throw e;
^
NoSessionIdError: A session id is required for this command but wasn't found in the response payload
0 passing (3.90s)
$
I find this very strange and inexplicable, especially considering that it even prints the session ID.
Any ideas?
Please check out the docs on the wdio test runner. You don't need to create an instance with init on your own; the wdio test runner takes care of creating and ending the session for you.
Your example covers standalone WebdriverIO usage (without the test runner). You can find examples that use wdio here.
To clarify: there are two ways of using WebdriverIO. You can embed it in your test system yourself (using it standalone, or as a scraper); then you need to take care of things like creating and ending instances, or running them in parallel. The other way is to use WebdriverIO's test runner, called wdio. The test runner takes a config file with a bunch of information about your test setup, then spawns instances, updates job information on Sauce Labs, and so on.
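As a rough illustration, with the wdio runner the session is available as a global browser object, so the spec could shrink to something like this (API names per that WebdriverIO era; treat as a sketch, not verified against your exact version):

describe('my webdriverio tests', function() {
    it('test it', function(done) {
        browser
            .url('http://localhost:3000/')
            .waitForVisible('h2.btn.btn-primary')
            .click('h2.btn.btn-primary')
            .waitForVisible('h2.btn.btn-primary')
            .call(done);
    });
});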
Every WebDriver command is executed asynchronously.
You properly called the done callback in afterEach and in your 'test it' test, but forgot to do it in beforeEach:
beforeEach(function(done) {
    client = webdriverio.remote({ desiredCapabilities: {browserName: 'firefox'} });
    client.init(done);
});