How should I create JavaScript that works on different platforms?

I'm working on a project that will essentially result in something like a "source-to-source compiler" for JavaScript. Actually, that is exactly the question: should it result in some kind of compiler at all? Here's what I want to do:
I write web apps in a generic way, and they shall be transformed into mobile-device-specific apps. So basically it's just:
|Generic call| ====transformed to====> Device-specific call
I have a set of generic calls that I define (e.g. Foo.locateByGPS) which shall be transformed into the device-specific native calls. The workflow is the following:
Write the app: JavaScript mixed with my own generic code.
Choose the target device and give the app to the "compiler", which creates a hybrid app (native parts with web parts).
Run it on the mobile device.
Apart from the generic code, all the rest is standard JavaScript that runs on all devices (or rather, in all browsers/webviews on those devices).
Do I build a (source-to-source) compiler for this transformation?
I'm new to this topic, so I'm very thankful for some hints.

I wouldn't. If you use functions, you can properly encapsulate all the platform-specific code.
Now you've got two possibilities:
Create two script files that contain only the platform-relevant code. -> You write the platform-specific functions twice. The advantage is that you don't have to load the code for both platforms if you're only interested in one. (See the sketch after the example below.)
Or use an if clause in every function. Ideally you would use feature detection. This method seems more future-proof (what if there are more than two platforms?) and makes the code easier to maintain. (Small changes are simpler to apply with all the code in one place.) Another advantage is that functions which are only partly different can share some code.
function locateWithGPS() {
  if (navigator.geolocation) {
    // do something
  } else {
    // use a work-around
  }
}
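For the first possibility, a rough sketch (the file names and the bridge comments are made up for illustration): each platform ships its own script that defines the same function, and only one of them is loaded per build.

// locate.android.js - shipped only with the Android build
function locateWithGPS(onSuccess) {
  // call the Android-specific bridge here
}

// locate.ios.js - shipped only with the iOS build
function locateWithGPS(onSuccess) {
  // call the iOS-specific bridge here
}

// app.js - generic code, identical for every platform
locateWithGPS(function (position) {
  // work with the position
});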

Related

Why do we use mockImplementation instead of using actual function which we need to test? [duplicate]

What is Mocking?
Prologue: If you look up the noun mock in the dictionary you will find that one of the definitions of the word is something made as an imitation.
Mocking is primarily used in unit testing. An object under test may have dependencies on other (complex) objects. To isolate the behaviour of the object you want to test you replace the other objects by mocks that simulate the behaviour of the real objects. This is useful if the real objects are impractical to incorporate into the unit test.
In short, mocking is creating objects that simulate the behaviour of real objects.
At times you may want to distinguish between mocking and stubbing. There may be some disagreement about this subject, but my definition of a stub is a "minimal" simulated object: it implements just enough behaviour to allow the object under test to execute the test.
A mock is like a stub but the test will also verify that the object under test calls the mock as expected. Part of the test is verifying that the mock was used correctly.
To give an example: You can stub a database by implementing a simple in-memory structure for storing records. The object under test can then read and write records to the database stub to allow it to execute the test. This could test some behaviour of the object not related to the database and the database stub would be included just to let the test run.
If you instead want to verify that the object under test writes some specific data to the database you will have to mock the database. Your test would then incorporate assertions about what was written to the database mock.
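To make the stub/mock distinction concrete, here is a rough, framework-agnostic sketch (the record shape and the names are invented):

// A stub: just enough behaviour for the object under test to run.
const databaseStub = {
  records: [] as Array<{ id: number }>,
  write(record: { id: number }) { this.records.push(record); },
  read(index: number) { return this.records[index]; }
};

// A mock: it also records how it was used, so the test can verify the interaction.
const databaseMock = {
  writtenRecords: [] as Array<{ id: number }>,
  write(record: { id: number }) { this.writtenRecords.push(record); }
};

// A stub-based test only checks the result of the object under test.
// A mock-based test additionally asserts on the interaction, e.g.:
// expect(databaseMock.writtenRecords).toContainEqual({ id: 42 });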
Other answers explain what mocking is. Let me walk you through it with a few different examples. And believe me, it's actually far simpler than you think.
tl;dr It's an instance of the original class with other data injected into it, so you avoid testing the injected parts and focus solely on testing the implementation details of your class/functions.
Simple example:
class Foo {
  func add(num1: Int, num2: Int) -> Int {  // Line A
    return num1 + num2                     // Line B
  }
}
let unit = Foo()  // unit under test
assertEqual(unit.add(1, 5), 6)
As you can see, I'm not testing Line A, i.e. I'm not validating the input parameters. I'm not checking whether num1 and num2 are integers; I have no asserts for that.
I'm only testing whether Line B (my implementation), given the mocked values 1 and 5, does what I expect.
Obviously, in the real world this can become much more complex. The parameters can be custom objects like a Person or an Address, and the implementation details can be more than a single +. But the logic of testing stays the same.
Non-coding Example:
Assume you're building a machine that identifies the type and brand of electronic devices for airport security. The machine does this by processing what it sees with its camera.
Now your manager walks in the door and asks you to unit-test it.
As a developer, you could bring 1000 real objects, like a MacBook Pro, a Google Nexus, a banana, an iPad, etc., put them in front of it, and see whether it all works.
But you can also use mocked objects, like an identical-looking MacBook Pro (with no real internal parts) or a plastic banana. You can save yourself from investing in 1000 real laptops and rotting bananas.
The point is that you're not trying to test whether the banana is fake, nor whether the laptop is fake. All you're testing is whether your machine, once it sees a banana, says "not an electronic device", and for a MacBook Pro says "Laptop, Apple". To the machine, the outcome of its detection should be the same for fake/mocked electronics and real electronics. If your machine also factored in the internals of a laptop (an X-ray scan) or of a banana, then your mocks' internals would need to look the same as well. Alternatively, you could use a MacBook that no longer works.
Had your machine been testing whether devices can power on, then you'd need real devices.
The logic above applies to unit testing of actual code as well: a function should work the same with real values coming from real input (and interactions) as with mocked values you inject during unit testing. And just as you save yourself from using a real banana or MacBook, with unit tests (and mocking) you save yourself from having to force your server to actually return a status code of 500, 403, 200, etc. (forcing your server to return 500 requires the server to be down, while 200 requires it to be up; it gets difficult to run 100 network-focused tests if you have to constantly wait 10 seconds while switching the server up and down). So instead you inject/mock a response with status code 500, 200, 403, etc. and test your unit/function with the injected/mocked value.
Be aware:
Sometimes you don't mock the actual object correctly, or you don't mock every possibility. E.g. your fake laptops are dark, and your machine works accurately with them, but then it doesn't work accurately with white fake laptops. Later, when you ship this machine to customers, they complain that it doesn't work all the time, and you get random reports that it's failing. It takes you three months to figure out that the colour of the fake laptops needs to be more varied so you can test your modules appropriately.
For a true coding example, your implementation may differ for a status code 200 with image data returned vs. a 200 with no image data returned. For this reason it's good to use an IDE that provides code coverage, which highlights the lines your unit tests never go through (marked in brown in the screenshot that accompanied the original answer).
Real world coding Example:
Let's say you are writing an iOS application that makes network calls. Your job is to test your application. Testing whether the network calls themselves work as expected is NOT YOUR RESPONSIBILITY; it's another party's (the server team's) responsibility. You must remove this (network) dependency and yet continue to test all of your code that works around it.
A network call can return different status codes (404, 500, 200, 303, etc.) with a JSON response.
Your app is supposed to work for all of them (in case of errors, your app should throw its expected error). With mocking, you create 'imaginary but similar-to-real' network responses (like a 200 code with a JSON file) and test your code without making the real network call and waiting for the real network response. You manually hardcode/return the network response for ALL kinds of network responses and see if your app works as you expect. (You never assume/test a 200 with incorrect data, because that is not your responsibility; your responsibility is to test your app with a correct 200, and in the case of a 400 or 500, to test whether your app throws the right error.)
Creating these imaginary, similar-to-real responses is known as mocking.
In order to do this, you can't use your original code as-is (your original code doesn't have the pre-inserted responses, right?). You must add something to it: inject/insert the dummy data that isn't normally needed (or part of your class).
So you create an instance of the original class, add whatever you need to it (here the network HTTPResponse and data, or in the case of failure, the correct errorString and HTTPResponse), and then test the mocked class.
Long story short, mocking is simplifying and limiting what you are testing, while feeding the class what it depends on. In this example you avoid testing the network calls themselves, and instead test whether your app works as you expect with the injected outputs/responses, by mocking classes.
Needless to say, you test each network response separately.
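To tie this back to the JavaScript question above, here is a hedged, Jest-style sketch (loadProfile and fetchUser are invented names; the network dependency is passed in as a parameter so the test can replace it):

// Hypothetical unit under test: it receives its network dependency as a parameter.
async function loadProfile(fetchUser) {
  const response = await fetchUser();
  if (response.status !== 200) throw new Error("server error");
  return response.body;
}

test("maps a 200 response to a profile", async () => {
  const fetchUser = jest.fn().mockImplementation(() =>
    Promise.resolve({ status: 200, body: { name: "Ada" } }));
  await expect(loadProfile(fetchUser)).resolves.toEqual({ name: "Ada" });
});

test("throws the expected error for a 500 response", async () => {
  const fetchUser = jest.fn().mockResolvedValue({ status: 500 });
  await expect(loadProfile(fetchUser)).rejects.toThrow("server error");
});

Each response code gets its own test, as described above, and no real server is ever contacted.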
Now, a question I always had in mind: the contracts/endpoints, and basically the JSON responses of my APIs, get updated constantly. How can I write unit tests that take this into consideration?
To elaborate: let's say the model requires a key/field named username. You test this and your test passes.
Two weeks later, the backend changes the key's name to id. Your tests still pass. Right? Or not?
Is it the backend developer's responsibility to update the mocks? Should it be part of our agreement that they provide updated mocks?
The answer is that unit tests plus your development process as a client-side developer should/would catch an outdated mocked response. How? Well:
Our actual app would fail (or not fail, yet not have the desired behaviour) without the updated APIs; hence, if that fails, we will make changes to our development code, which in turn leads to our tests failing, which we'll then have to correct. (Actually, if we follow the TDD process correctly, we shouldn't write any code for the field unless we first write the test for it, see it fail, and then go and write the actual development code.)
This all means that the backend doesn't have to say "hey, we updated the mocks"; it eventually happens through your code development/debugging, because it's all part of the development process! Though if the backend provides the mocked responses for you, then it's easier.
My whole point is that (if you can't automate getting updated mocked API responses) human interaction is likely required, i.e. manual updates of JSONs and short meetings to make sure their values are up to date will become part of your process.
This section was written thanks to a slack discussion in our CocoaHead meetup group
Confusion:
It took me a while to stop confusing 'unit tests for a class' with the 'stubs/mocks of a class'.
E.g. in our codebase we have:
class Device
class DeviceTests
class MockDevice
class DeviceManager
class Device is the actual class itself.
class DeviceTests is where we write unit tests for the Device class.
class MockDevice is a mock of the Device class. We use it only for testing purposes, e.g. if our DeviceManager needs to be unit-tested, then we need dummy/mock instances of the Device class. MockDevice can be used to fulfil that need for dummy/mock instances.
tl;dr: you use mock classes/objects to test other objects. You don't use mock objects to test themselves.
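A minimal sketch of that relationship (the members are invented; the real classes aren't shown here): MockDevice stands in for Device only so that DeviceManager can be tested.

interface Device { isConnected(): boolean; }

class MockDevice implements Device {
  // Pretends to be a Device; always reports that it is connected.
  isConnected(): boolean { return true; }
}

class DeviceManager {
  constructor(private devices: Device[]) {}
  connectedCount(): number {
    return this.devices.filter(d => d.isConnected()).length;
  }
}

// In DeviceManagerTests you assert against manager, never against MockDevice itself:
const manager = new DeviceManager([new MockDevice(), new MockDevice()]);
// expect(manager.connectedCount()).toBe(2);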
For iOS devs only:
A very good example of mocking is this Practical Protocol-Oriented talk by Natasha Muraschev. Just skip to minute 18:30, though the slides may become out of sync with the actual video 🤷‍♂️
I really like this part from the transcript:
Because this is testing...we do want to make sure that the get function
from the Gettable is called, because it can return and the function
could theoretically assign an array of food items from anywhere. We
need to make sure that it is called;
There are plenty of answers on SO and good posts on the web about mocking. One place that you might want to start looking is the post by Martin Fowler Mocks Aren't Stubs where he discusses a lot of the ideas of mocking.
In one paragraph: mocking is one particular technique that allows testing a unit of code without being reliant upon its dependencies. In general, what differentiates mocking from other methods is that the mock objects used to replace code dependencies allow expectations to be set: a mock object knows how it is meant to be called by your code and how to respond.
Your original question mentioned TypeMock, so I've left my answer to that below:
TypeMock is the name of a commercial mocking framework.
It offers all the features of the free mocking frameworks like RhinoMocks and Moq, plus some more powerful options.
Whether or not you need TypeMock is highly debatable - you can do most mocking you would ever want with free mocking libraries, and many argue that the abilities offered by TypeMock will often lead you away from well encapsulated design.
As another answer stated, 'TypeMocking' is not actually a defined concept, but could be taken to mean the type of mocking that TypeMock offers: using the CLR profiler to intercept .NET calls at runtime, which gives a much greater ability to fake objects (without requirements such as interfaces or virtual methods).
Mock is a method/object that simulates the behavior of a real method/object in controlled ways. Mock objects are used in unit testing.
Often a method under test calls other external services or methods within it. These are called dependencies. Once mocked, the dependencies behave the way we defined them.
With the dependencies being controlled by mocks, we can easily test the behavior of the method that we coded. This is Unit testing.
What is the purpose of mock objects?
Mocks vs stubs
Unit tests vs Functional tests
Mocking is generating pseudo-objects that simulate the behaviour of real objects for tests.
The purpose of mocking types is to sever dependencies in order to isolate the test to a specific unit. Stubs are simple surrogates, while mocks are surrogates that can verify usage. A mocking framework is a tool that will help you generate stubs and mocks.
EDIT: Since the original wording mentioned "type mocking", I got the impression that this related to TypeMock. In my experience the general term is just "mocking". Please feel free to disregard the info below specifically on TypeMock.
TypeMock Isolator differs from most other mocking frameworks in that it works by modifying IL on the fly. That allows it to mock types and instances that most other frameworks cannot. To mock these types/instances with other frameworks, you must provide your own abstractions and mock those instead.
TypeMock offers great flexibility at the expense of a clean runtime environment. As a side effect of the way TypeMock achieves its results, you will sometimes get very strange results when using it.
I would think that using the TypeMock Isolator mocking framework would be 'TypeMocking'.
It is a tool that generates mocks for use in unit tests, without the need to write your code with IoC in mind.
If your mock involves a network request, another alternative is to have a real test server to hit. You can use a service to generate a request and response for your testing.

Is JavaScript compatible with strict Page Object Pattern?

I have built various Test Automation frameworks using the Page Object Pattern with Java (https://code.google.com/p/selenium/wiki/PageObjects).
Two of the big benefits I have found are:
1) You can see what methods are available when you have an instance of a page (e.g. typing homepage. will show you all the actions/methods you can call from the homepage).
2) Because navigation methods (e.g. goToHomepage()) return an instance of the subsequent page (e.g. homepage), you can navigate through your tests simply by writing the code and seeing where it takes you.
e.g.
WelcomePage welcomePage = loginPage.loginWithValidUser(validUser);
PaymentsPage paymentsPage = welcomePage.goToPaymentsPage();
These benefits work perfectly with Java since the type of object (or page in this case) is known by the IDE.
However, with JavaScript (a dynamically typed language), the object type is not fixed at any point and is often ambiguous to the IDE. Therefore, I cannot see how you can realise these benefits in an automation suite built using JavaScript (e.g. one using Cucumber).
Can anyone show me how you would use JavaScript with the Page Object Pattern to gain these benefits?
From Gerrit0's comment above and from investigating it further, it seems a great way to achieve this is to use TypeScript (a statically typed superset of JavaScript):
https://en.wikipedia.org/wiki/TypeScript
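As a rough sketch of what that buys you (the class and method names are invented for illustration): because each navigation method declares the page object it returns, the IDE can offer completion on the result, just like in the Java version.

class PaymentsPage {
  makePayment(amount: number): PaymentsPage {
    // drive the payments UI here
    return this;
  }
}

class WelcomePage {
  goToPaymentsPage(): PaymentsPage {
    // click the payments navigation link here
    return new PaymentsPage();
  }
}

class LoginPage {
  loginWithValidUser(user: string): WelcomePage {
    // fill in the login form here
    return new WelcomePage();
  }
}

// Typing "welcomePage." now lists goToPaymentsPage() in the IDE:
const welcomePage = new LoginPage().loginWithValidUser("validUser");
const paymentsPage = welcomePage.goToPaymentsPage();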
I don't know much about this pattern, but I'll share some resources that may help you:
http://www.guru99.com/page-object-model-pom-page-factory-in-selenium-ultimate-guide.html
http://www.assertselenium.com/automation-design-practices/page-object-pattern/
If you use JetBrains products like IntelliJ IDEA, they will do code completion and the proper navigation for you. In the JavaScript world the page object is a known pattern too; AngularJS offers it in its own e2e test framework (http://www.protractortest.org/#/page-objects). Personally I use IIFEs for page objects and IntelliJ does the rest. If that doesn't fit your needs, you can still choose TypeScript and transpile it to JavaScript.
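For reference, a minimal IIFE-style page object along those lines (all names are invented):

const loginPage = (function () {
  // Private helper, hidden inside the closure.
  function fillCredentials(user: string) {
    // type the credentials into the form fields here
  }

  // Only the page's actions are exposed.
  return {
    loginWithValidUser(user: string) {
      fillCredentials(user);
      // return the next page object here (e.g. welcomePage) to keep the chain going
    }
  };
})();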

On guidelines for breaking down JavaScript software structure

I'm trying to refactor my old Colorchess (ChessHighlight) program. It's a chess board that highlights the influence of the chessmen on each turn, to help beginners understand the game.
According to the pressure balance on the board at a given turn, tiles are colorized as follows:
green = no pressure
white = the white player owns the tile
black = the black player owns the tile
a color picked from a green-yellow-orange-red gradient = conflictual situation for this tile
An AI is in the project, but for the moment I'm focusing on making this game playable correctly across devices, in both situations: table gaming on a tablet, or network gaming.
I've decided to code the client side in JavaScript (I love it!), and server syncing will be in PHP, since that's what my current hosting environment runs.
My questions and thoughts come when I try to put it all together:
(client-side libraries)
- RequireJS --> file loading
- KnockoutJS --> UI event binding
- ICanHaz --> templating
- Zepto --> DOM manipulation
- and maybe underscoreJS for utilities
I'm worried about producing spaghetti code that is difficult to understand and maintain.
In the old program, ChessHighlight, there were lots of interlaced constructor declarations and prototype extensions, for example:
// file board.js
function Board() { ... }
function Tile() { ... }
// next included file:
function Chessman() { ... }
// again in a third included file,
// since Board and Chessman are now defined
Tile.prototype.put = function (chessman) { ... };
Tile.prototype.empty = function () { ... };
Due to the highly coupled nature of the game, each file included in the stack carried more and more definitions, and the code became messy...
A difficulty is that the game needs a transactional implementation, since I have setters like:
// both... (think ACID commit in a RDBMS)
tile_h8.chessman = rook_white_1;
rook_white_1.tile = tile_h8;
Now I solve this issue (partially) by creating an "Object Relational Pool Manager", which is intended to store:
references to objects of any kind (Board, Chessman, Tile, Influence...)
the relations between those objects
and, usefully, some type checks and cardinality/arity summing
(I'm baking the code at this time.)
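For illustration only (this is not the pool manager itself, and the names are hypothetical), one way to keep both sides of such a relation consistent is to route every change through a single helper:

// Hypothetical helper: the only code allowed to touch both sides of the relation.
function moveChessman(chessman, targetTile) {
  if (chessman.tile) {
    chessman.tile.chessman = null; // vacate the previous tile
  }
  targetTile.chessman = chessman;  // both sides of the relation...
  chessman.tile = targetTile;      // ...are updated in one place, never half-way
}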
SOME QUESTIONS:
How do I write extensible code (elegantly, with no class/interface simulation; rather prototypes and closures) in such a way that basic atoms of code (Tile, Board, Chessman) are defined in very short files and are then glued together in another part, adding behaviour with callbacks?
NOTE: I've tried reading game-engine code (Crafty, Quintus...), but the cores of these engines (around 1600 lines of code), although well documented, are difficult to understand (where is the starting point? where are the runtime flows?).
UML: I have the feeling that classical methodologies can rapidly fail with closures, callbacks and nested scopes. The code seems instinctive to write and understand, but drawing diagrams seems to be tricky... what do good JS developers use as a safety rope to climb 1500+ line-of-code peaks?
And the last one: I would like a "jQuery-like" engine API, to plug easily into the computed observables of the Knockout ViewModels of the GUI. Something like this:
var colorchess = new Colorchess( my_VM_for_this_DIV_part );
colorchess.reset( "standard-game" );
colorchess("a1") --> a wrapper for "a1" tile
colorchess("h8").chessman() --> a wrapper for "h8" tile's chessman (rook)
// iterate on black chessmen
colorchess("black").each( function( ref, chessman ) {} )
// a chainable example
colorchess("white").chessman("queen").influences()
... but for the moment, I don't know exactly how to model, write, and test that kind of mutable object.
Suggestions are welcome. Thanks for help.
I don't think that defining your objects with constructor functions is bad. Defining them as closures is worse, because it consumes more resources and isn't as easily optimized by JS engines.
Tightly coupled objects are a problem you'll have with closures as well; you can use the mediator pattern for that. A mediator can complicate your code, but you can easily control the application flow once you have set it up.
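A minimal mediator sketch (the event names are invented) to show the idea: modules only talk to the mediator, never directly to each other.

const mediator = {
  handlers: {} as Record<string, Array<(payload: any) => void>>,
  on(event: string, handler: (payload: any) => void) {
    (this.handlers[event] = this.handlers[event] || []).push(handler);
  },
  emit(event: string, payload: any) {
    (this.handlers[event] || []).forEach(h => h(payload));
  }
};

// The board only knows about events, not about tiles or chessmen directly.
mediator.on("chessman:moved", move => {
  // recompute the influence colors for the affected tiles
});

// A tile (or the game loop) just announces what happened.
mediator.emit("chessman:moved", { from: "e2", to: "e4" });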
Having good tools is important in big JS projects. I've tried some: Google Closure Compiler with Eclipse, Visual Studio with TypeScript, and now Dart (Google Dart) with Dart Editor (= Eclipse with the right plugins). These can help you spot inconsistencies quickly and refactor more easily. TypeScript would be the easiest one to use JS libraries with, because TypeScript is JS with optional extensions. Not sure how the AngularJS port to Dart is coming along, but that is something worth looking at.
As for ACID: if you're talking about updating the state of your client-side objects so that they reflect the data in the server's database, you can use promises instead of callbacks. That will improve the readability of the XHR updates. Write your PHP so that you have update messages carrying the desired state of both the piece and the tile, so it can change the data in a single transaction.
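A hedged sketch of that promise-based sync (the endpoint and payload shape are invented, and fetch stands in for a promise-wrapped XHR): the client sends the desired state of both the piece and the tile in one message, and the PHP side can commit it in a single transaction.

function syncMove(move: { chessman: string; from: string; to: string }) {
  // One message carries the whole desired state, so the server can apply it atomically.
  return fetch("/api/move.php", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(move)
  }).then(response => {
    if (!response.ok) throw new Error("move rejected");
    return response.json();
  });
}

syncMove({ chessman: "rook_white_1", from: "h1", to: "h8" })
  .then(confirmedState => { /* update the local board from the confirmed state */ })
  .catch(() => { /* roll back the optimistic move */ });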

How Can I Modify/"Spoof" Standard Browser JS DOM Objects (Window.location) at Runtime?

I'd like to dynamically change some of the standard JS DOM objects from within a web browser.
For instance, when I execute:
var site = location;
I want to specify a new value for my browser's "window.location" object other than the "correct" one (the URL used to access the requested page) at run time, either through a debugger-like interface or even programmatically if need be.
Although Firebug advertises the capability to do something similar via its "DOM Inspector", whenever I try to modify any of the DOM values while the JavaScript is paused in its debugger, it simply ignores the new value I enter. After doing some research, it seems that this is a known issue according to this bug report: http://code.google.com/p/fbug/issues/detail?id=1707 .
Theoretically, I could write a program that simply opens an HTTP socket and emulates a browser "user agent", but this seems like a lot of trouble for my purposes. While I'm asking: does anyone know a good Java/C# library with functions/objects that emulate HTTP headers and parse the received HTML/JS? I've long dreamt about the existence of such a library, but most of the ones I've tried (Java's Apache HttpClient, C#'s System.Net.HttpWebRequest) are far too low-level to build anything worthwhile with minimal planning and in a short period of time.
Thanks in advance for recommendations and advice you can provide!
Not sure if I understand you correctly, but if you want to change the loaded URL, you can do that by setting window.location.href.
If your intent is to replace DOM built-ins, then you'll be sad to hear that most built-in objects (host objects) aren't regular JavaScript objects and their behaviour is not clearly defined. Some browsers may allow you to replace and/or extend some objects, while in other browsers they won't be replaceable/extendable at all.
If you want to "script a browser" using JavaScript, you should definitely have a look at node.js and its http module. There's also a third-party module called html5 that simulates the DOM in node.js and even allows the use of jQuery.
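A small sketch of that node.js direction (plain core http; a DOM simulation such as the html5 module mentioned above would sit on top of this):

const http = require("http");

// Request a page while presenting an arbitrary User-Agent header.
http.get(
  { host: "example.com", path: "/", headers: { "User-Agent": "MyFakeBrowser/1.0" } },
  res => {
    let body = "";
    res.on("data", chunk => { body += chunk; });
    res.on("end", () => {
      // hand `body` to a DOM simulation for parsing
      console.log(res.statusCode, body.length);
    });
  }
);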

Is there a tool to remove unused methods in javascript?

I've got a collection of JavaScript files from a third party, and I'd like to remove all the unused methods to get the size down to a more reasonable level.
Does anyone know of a tool that does this for JavaScript? Or that at the very least gives a list of used/unused methods, so I could do the trimming manually? This would be in addition to running something like the YUI JavaScript compressor tool...
Otherwise my thought is to write a perl script to attempt to help me do this.
No, because you can "use" methods in insanely dynamic ways, like this:
obj[prompt("Gimme a method name.")]();
Check out JSCoverage. It generates code-coverage statistics that show which lines of a program have been executed (and which have been missed).
I'd like to remove all the unused methods to get size down to a more reasonable level.
There are a couple of tools available:
npm install -g fixmyjs
fixmyjs <filename or folder>
It is a configurable module that uses JSHint (GitHub, docs) to flag functions that are unused and to perform clean-up as well.
I'm not sure whether it removes unused functions or only flags them. Though it is a great tool for clean-up, it appears to lack compatibility with later versions of ECMAScript (more info below).
There is also the Google Closure Compiler which claims to remove dead JS but this is more of a build tool.
Updated
If you are using something like Babel, consider adding ESLint to your text editor. It can warn about unused methods and even unused variables, and it has a --fix CLI option for auto-fixing some errors and style issues.
I like ESLint because it has multiple plugins for alternative libraries (like React warnings if you're missing a prop), allowing you to catch bugs in advance. They have a solid ecosystem.
As an example: on my Node.js projects, the config I use is based on the Airbnb Style Guide.
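For instance, a minimal .eslintrc.js along those lines (the rule level is just an example) might look like:

// .eslintrc.js
module.exports = {
  parserOptions: { ecmaVersion: 2020, sourceType: "module" },
  rules: {
    "no-unused-vars": "warn" // also flags functions assigned to variables that are never used
  }
};

Running npx eslint . (optionally with the --fix flag mentioned above) then reports the unused names.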
You'll have to write a Perl script. Take no notice of the naysayers above.
Such a tool could work with libraries that are designed to only make function calls explicitly. That means no delegates or pointers to functions would be allowed, the use of which in any case only results in unreadable "spaghetti code" and is not best practice. Even if it removes some of these hidden functions, you'll discover most if not all of them in testing. The ones you don't discover will be so infrequently used that they will not be worth your time fixing. Don't obsess over perfection; people go mad doing that.
So applying this one restriction to JavaScript (and libraries) will result in incredible reductions in page size and therefore load times, not to mention readability and maintainability. This is already the case for tools that remove unused CSS, such as grunt_CSS and unCSS (see http://addyosmani.com/blog/removing-unused-css/), which report typical reductions down to one tenth of the original size.
It's a win/win situation.
It's noteworthy that all interpreters must address this issue of how to manage self-modifying code. For the life of me I don't understand why people want to persist with unrestrained freedom. As noted by Triptych above, JavaScript functions can be called in ways that are literally "insane". This insane flexibility corrupts the fundamental doctrine of separation of code and data, enables real-time code injection, and invalidates any attempt to maintain code integrity. The result is always unreadable code that is impossible to debug, and the side effect for JavaScript, removing the ability to run automatic code pre-optimisation and validation, is much, much worse than any possible benefit.
AND you'd have to feel pretty insecure about your work to want to deliberately obfuscate it from both your colleagues and yourself. Browser clients that do work extremely well take the "less is more" approach, and the best example I've seen to date is Microsoft Office's combination of Access Web Forms paired with SharePoint Access Services. The productivity of having a ubiquitous, heavy, tightly managed runtime interpreter client and its server-side clone is absolutely phenomenal.
The future of JavaScript self-modifying-code technologies is therefore to bring them back into line to respect the...
KISS principle of code and data: Keep It Separate, Stupid.
Unless the library author kept track of dependencies and provided a way to download minimal code (e.g. the MooTools Core download), it will be hard to identify 'unused' functions.
The problem is that JS is a dynamic language and there are several ways to call a function.
E.g. you may have a method like
function test()
{
  // ...
}
You can call it like
test();
var i = 10;
var hello = i > 1 ? 'test' : 'xyz';
window[hello]();
I know this is an old question, but UglifyJS2 supports removing unused code, which may be what you are looking for.
Also worth noting: ESLint supports a rule called no-unused-vars, which does some basic detection of whether functions are used or not. It definitely detects this if you make the function anonymous and store it in a variable (just be aware that, as a variable, the function doesn't get hoisted the way a function declaration would).
In the context of detecting unused functions, while extreme, you can consider breaking up the majority of your functions into separate modules, because there are packages and tools to help detect unused modules. There is a little segment of Sindre Sorhus's thoughts on tiny modules which might be relevant to that philosophy, though it may be extreme for your use case.
The following would help:
If you have fully covered test cases, running a code-coverage tool like istanbul (https://github.com/gotwarlost/istanbul) or nyc (https://github.com/istanbuljs/nyc) will give a hint about untouched functions.
At the very least, the above will help you find the covered functions, which you may have thought were unused.
