I'm building a page that will have various 'sections' that the user will pass through, each with their own logic. For instance:
Loading section: loading blinker. When finished, continue to:
Splash section: an introduction with some UI elements that need to be interacted with to continue. (let's just say there's a 'slide to unlock')
Video premiere: a custom player using the Youtube embed Javascript API.
Since some of these sections have a lot of specific logic, I'm separating these out into components. Almost all of this logic is internal to the component, but occasionally, I'd like to call a function within component A from component B. See the latter lines of main.js and Splash.js
main.js
import $ from 'jquery';
import Loading from './components/Loading.js';
import Splash from './components/Splash.js';
import Premiere from './components/Premiere.js';
$(() => {
Loading.init();
Splash.init();
Premiere.init();
Splash.onUnlock = Premiere.playVideo;
});
Loading.js:
const Loading = {
init() {
// watches for events, controls loading UI, etc.
},
// ... other functions for this section
}
export default Loading;
Splash.js
const Splash = {
init() {
// watches for events, controls unlocking UX, etc.
},
// ... other functions for this section
unlock() {
// called when UX is completed
this.onUnlock();
}
}
export default Splash;
Premiere.js
const Premiere = {
init() {
this.video = $('#premiereVideo');
// watches for events, binds API & player controls, etc.
},
// ... other functions for this section
playVideo() {
this.video.play()
},
}
export default Premiere;
Now, I'd expect that when this.onUnlock() is called within Splash, Premiere.playVideo() would be triggered. But, I get an error: video is not defined - because it's looking for Splash.video, not Premiere.video.
From what I understand, assigning an object or its property to a variable creates a reference to that property, not a duplicate instance. It seems I'm not understanding this correctly.
Changing this.video.play() to Premiere.video.play() works, but I feel like I'm still missing the point.
What's up?
(Possibly related sub-question: would I benefit from defining these components as classes, even if they're only going to be used once?)
So, to answer your question: the reason you get video is not defined is that you are accessing this after its context has changed. The fix is:
Premiere.playVideo.bind(Premiere)
The bind makes sure that when playVideo is called, it is called in the context of Premiere rather than the context of Splash, and Premiere's context is the one that has this.video.
The code I used to verify:
const Splash = {
init() {
// watches for events, controls unlocking UX, etc.
},
// ... other functions for this section
unlock() {
// called when UX is completed
this.onUnlock();
}
}
const Premiere = {
init() {
this.video = {
play() {
console.log("playing");
}
};
// watches for events, binds API & player controls, etc.
},
// ... other functions for this section
playVideo() {
console.log(this);
this.video.play()
},
}
Premiere.init();
Splash.onUnlock = Premiere.playVideo.bind(Premiere);
console.log(Splash);
Splash.unlock();
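As a side note, if you would rather avoid bind, an arrow-function wrapper gives the same result (a minimal sketch, assuming the same Splash and Premiere objects as above):
Splash.onUnlock = () => Premiere.playVideo();
Because the wrapper explicitly calls playVideo as a method of Premiere, this inside playVideo is Premiere, so this.video is defined.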
However, this particular "architecture" is a bit smelly to me. You could use the chain of responsibility pattern, where the current object knows what to call next after it has done its work.
class DoWork {
constructor(nextWorkItem) {
this.nextWorkItem = nextWorkItem;
}
doWork() {
}
}
class LoadingComponentHandler extends DoWork {
constructor(nextWorkItem) {
super(nextWorkItem);
}
doWork() {
// do what you need here
console.log("Doing loading work")
// at the end call
this.nextWorkItem.doWork();
}
}
class SplashComponentHandler extends DoWork {
constructor(nextWorkItem) {
super(nextWorkItem);
}
doWork() {
// do what you need here
console.log("Doing Splash Work")
// at the end call
this.nextWorkItem.doWork();
}
}
class PremiereComponentHandler extends DoWork {
constructor(nextWorkItem) {
super(nextWorkItem);
}
doWork() {
// do what you need here
console.log("Doing Premiere Work")
// at the end call
this.nextWorkItem.doWork();
}
}
class FinishComponentHandler extends DoWork {
constructor() {
super(null);
}
doWork() {
console.log("End of the line now");
}
}
var finish = new FinishComponentHandler();
var premiere = new PremiereComponentHandler(finish);
var splash = new SplashComponentHandler(premiere);
var loading = new LoadingComponentHandler(splash);
loading.doWork();
The FinishComponent is part of the Null Object Pattern, where its implementation is a noop (no operation). This effectively ends the chain of responsibility. Of course you don't need a FinishComponent; you can just not call this.nextWorkItem.doWork() and the chain will end there. I have it there because it makes it easier to see where the chain stops.
You can see from the last four lines that the chain of responsibility is easy to see:
var finish = new FinishComponentHandler();
var premiere = new PremiereComponentHandler(finish);
var splash = new SplashComponentHandler(premiere);
var loading = new LoadingComponentHandler(splash);
The loading component will call doWork on the splash object, which will in turn call doWork on the premiere object, and so on, and so forth.
This pattern relies on the inheritance of DoWork, which acts as a kind of interface for the handlers.
This probably isn't the best implementation, but you can see how you don't have to worry about the last thing that was called, or how to specially call the next. You just pass the object you wish to come next into the constructor and make sure you call it at the end of your operations.
I noticed you had
// watches for events, controls unlocking UX, etc.
The doWork() functions can execute these bindings, delegating them to the proper components that deal with them. So, for example, SplashComponentHandler can delegate off to a SplashComponent. It's good practice to keep these separations of concerns.
How this addresses your issue
Splash.onUnlock = Premiere.playVideo.bind(Premiere);
Firstly, Splash.onUnlock has no implementation until you give it one. Secondly, the fact that you have to bind a context to your function because it gets executed under a different context doesn't sound good.
So you can imagine in SplashComponentHandler.doWork():
doWork() {
var component = new SplashComponent();
component.initialise(); // when this is finished we want to execute the premiere
this.nextWorkItem.doWork();
}
And in PremiereComponentHandler.doWork():
doWork() {
var component = new PremiereComponent();
component.bind(); // makes sure there is a this.video.
component.playVideo();
}
See that SplashComponentHandler now has no knowledge of what the next handler is; it just knows that when it has finished its job, it needs to call the next handler.
There is no this binding, because doWork() is executed in the context of PremiereComponentHandler, or whatever handler was passed to SplashComponentHandler.
Furthermore
Technically speaking, you're not limited to executing one handler after another. You can create a handler that executes many other handlers (see the sketch below). Each handler that gets executed will have knowledge of the next one until you stop calling into them.
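For example, here is a rough sketch of such a fan-out handler, reusing the DoWork base class from above (the ParallelComponentHandler name is just an illustration, not part of the original code):
class ParallelComponentHandler extends DoWork {
    constructor(handlers, nextWorkItem) {
        super(nextWorkItem);
        this.handlers = handlers;
    }
    doWork() {
        // run every sub-handler, then continue the main chain
        this.handlers.forEach(handler => handler.doWork());
        this.nextWorkItem.doWork();
    }
}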
Another question is: once premiere is done doing its work, how can splash do something else afterwards? Simple. Working from the previous scenario of decoupling, this is SplashComponentHandler.doWork():
doWork() {
var component = new SplashComponent();
component.initialise(); // when this is finished we want to execute the premiere
this.nextWorkItem.doWork();
// so when we get to this execution step
// the next work item (PremiereComponentHandler)
// has finished executing. So now you can do something after that.
component.destroy(); // cleanup after the component
fetch('api/profile') // i don't know, whatever you want.
.then(response => response.json())
.then(profileContent => {
component.splashProfile = profileContent;
});
}
On that last note about using a Promise: you can make the whole doWork() asynchronous using promises. Just return this.nextWorkItem.doWork(), and then the initialisation steps look like this:
var finish = new FinishComponentHandler();
var premiere = new PremiereComponentHandler(finish);
var splash = new SplashComponentHandler(premiere);
var loading = new LoadingComponentHandler(splash);
loading
.doWork()
.then(() => {
// here we have finished do work asynchronously.
})
.catch(() => {
// there was an error executing a handler.
});
The trick to making it all use Promises is to make sure that you always return a promise from doWork().
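A minimal sketch of what such a promise-returning handler could look like, assuming the component's initialise() itself returns a promise (which is an assumption on top of the original code):
class SplashComponentHandler extends DoWork {
    constructor(nextWorkItem) {
        super(nextWorkItem);
    }
    doWork() {
        var component = new SplashComponent();
        // initialise() is assumed to return a promise here
        return component.initialise()
            .then(() => this.nextWorkItem.doWork()); // keep returning promises down the chain
    }
}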
Related
Are all setIntervals cleared on scene change in Phaser 3? For example, if I have this code:
class ExampleScene extends Phaser.Scene {
constructor () {
super();
}
preload () {
}
create () {
setInterval(() => console.log(true), 1000);
}
update () {
}
}
and I change scenes, will it continue to log true to the console? Is there an alternative in Phaser that doesn't require that I remove all intervals manually?
The short answer is: no, they are not cleared automatically, since setInterval is a plain JavaScript function that Phaser doesn't manage.
For details on the function, see the documentation on MDN.
What you can do is "save" the setInterval IDs in a list and clear them on a specific scene event like shutdown, pause, or destroy. When that event fires, you can then stop all saved intervals (see the docs for the possible Scene events).
This is also considered good practice, since you should always clean up resources when leaving a Phaser scene.
Example (here with the shutdown event):
...
// setup list for the intervals, that should be cleared later
constructor () {
super();
this.intervals = [];
}
create () {
...
// example of adding an interval, so that it can be cleaned up later
this.intervals.push(setInterval(() => console.log(true), 1000));
...
// example listening to the shutdown Event
this.events.on('shutdown', this.stopAllIntervals, this);
}
...
// example "cleanup"-function, that is execute on the 'shutdown' Event
stopAllIntervals(){
for(let interval of this.intervals){
clearInterval(interval);
}
}
...
And now you just have to call stopAllIntervals in the desired event handler, whenever you want to stop them all.
From the official documentation, on the shutdown Event: ... You should free-up any resources that may be in use by your Scene in this event handler, on the understanding that the Scene may, at any time, become active again. A shutdown Scene is not 'destroyed', it's simply not currently active. Use the DESTROY event to completely clear resources. ...
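Following that quote, if you want to be thorough you could hook the same cleanup method to the destroy event as well. A small sketch, reusing the stopAllIntervals function from the example above:
// in create(), alongside the shutdown listener
this.events.once('destroy', this.stopAllIntervals, this);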
I am trying to delay when the first picture is clicked, because the click is firing off before the picture enters the screen. Maybe I need to put it in an if/else statement?
// Instagram hacks
// Search field
// let Searchtest= prompt("Please enter the hashtag you want to like","Trending");
// var search = document.querySelector('.x3qfX').value = "#" + Searchtest;
document.querySelector(".glyphsSpriteSafari__outline__24__grey_9").click();
let firstPicture = document.querySelector("div._9AhH0");
firstPicture.click();
let likesGiven = 0;
setInterval(() => {
let heart = document.getElementsByClassName(
"glyphsSpriteHeart__outline__24__grey_9"
),
arrow = document.querySelector(".coreSpriteRightPaginationArrow");
if (heart[1]) {
heart = heart[1].parentElement;
likesGiven++, heart.click();
}
arrow.click();
console.log(`You've liked ${likesGiven} post(s)!`);
}, 2000);
// Button Liker
My last attempt: run this in your console from Instagram's homepage and you will see what I mean.
document.querySelector(".glyphsSpriteSafari__outline__24__grey_9").click();
let firstPicture = document.querySelector("div._9AhH0");
if (firstPicture){
firstPicture.click();
}
OK, here it is: maybe you should wait for the document to finish loading. It seems you can do that with a DOMContentLoaded event listener, and then in the ready callback you can execute your click function. See the example below.
The DOM has not changed in ES6; ES6 gives new features to JavaScript, that is all. Pure JS has an event for when the DOM is loaded, the equivalent of jQuery's document ready:
document.addEventListener("DOMContentLoaded", function(){ /* do something here */ });
Modules that work with the DOM tree can have the listener inside, or they should be used after the DOM is ready. I created an example DOM function to show what I mean:
var DOM=function(selector){
document.addEventListener("DOMContentLoaded",()=>{
this.element=document.querySelector(selector);
if (typeof this.callback === 'function')
this.callback();
});
};
//HERE WE HAVE CALLBACK WHEN OUR MODULE CAN BE USED
DOM.prototype.onReady=function(callback){
this.callback=callback;
};
DOM.prototype.getElement=function(){
//example object method
return this.element;
};
DOM.prototype.click=function(){
return this.element.click();
};
Usage example:
document.querySelector(".glyphsSpriteSafari__outline__24__grey_9").click();
var d=new DOM("div._9AhH0");
d.onReady(()=>{
d.click();
});
//your other code
Modules should be DOM independent; creating modules which export DOM elements directly is very bad practice. So it can be done in two ways:
Modules should get selectors or DOM objects as arguments and should be called after the DOM is ready. Your module then has no idea where it is called from, but it needs a ready DOM structure. In this situation the DOM ready callback lives only in the main file, which uses the modules and calls them (see the sketch after these two points).
Modules can have their own DOM ready listeners, but then we also need some way of knowing when the module can be used (this is the situation I showed in the example with the onReady function).
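Here is a small sketch of the first approach; the Gallery module name and its init signature are just illustrative, not from the original code:
// gallery.js - knows nothing about when the DOM is ready
var Gallery = {
  init: function (firstPicture) {
    firstPicture.click();
  }
};

// main.js - the only place that waits for the DOM
document.addEventListener("DOMContentLoaded", function () {
  Gallery.init(document.querySelector("div._9AhH0"));
});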
You might try a while loop that sleeps for a little bit and then checks to see if your required element has appeared in the DOM.
Add this sleep function.
function sleep(ms) {
return new Promise(resolve => setTimeout(resolve, ms));
}
Now put this bit of code at the start of your method, where firstPicture isn't found (note that await only works inside an async function). It makes the script wait a tenth of a second if it doesn't find the element and then try again. Once it finds the element, your code continues as expected.
while (document.querySelector("div._9AhH0") === null) {
  await sleep(100);
}
I am very new to JavaScript and NodeJS, and I was just trying to understand the emitter pattern in NodeJS. When I try to emit a tick event every second using the setInterval function, the program seems to work fine:
var util = require('util'),
EventEmitter = require('events').EventEmitter;
var Ticker = function() {
var self = this;
setInterval(function() {
self.emit('tick');
}, 1000);
};
util.inherits(Ticker, EventEmitter)
var ticker = new Ticker();
ticker.on('tick', function() {
console.log('TICK');
});
But when I try to emit an event without using the setInterval method, my event is not being called:
var util = require('util'),
EventEmitter = require('events').EventEmitter;
var Ticker = function() {
var self = this;
self.emit('tick');
};
util.inherits(Ticker, EventEmitter)
var ticker = new Ticker();
ticker.on('tick', function() {
console.log('TICK');
});
Please help, I don't understand where I am wrong...
As far as I understand, when self.emit is called, ticker.on is not yet registered, and hence the event is missed. If this is the case, how do I emit an event when an object is created?
JavaScript is a (mostly*) synchronous language: unless otherwise specified, code runs from top to bottom, and only asynchronous events are queued for later.
Without the setInterval queuing the emit() for later, you have something like this:
create Ticker
Ticker.emit()
Ticker.on(...)
So basically, the .emit() happens synchronously, and before the first call to .on().
*Mostly because with ES2015 we have Promises, which are a language-level construct for describing something asynchronous, that's not important, however, for the problem you're observing.
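If you do want the constructor itself to emit, one common workaround (just a sketch, building on the code from the question) is to defer the emit until the current synchronous code has finished, so the listener attached right after new Ticker() is already registered:
var Ticker = function() {
    var self = this;
    // queue the emit; it runs after the current synchronous code,
    // by which point ticker.on('tick', ...) has been registered
    process.nextTick(function() {
        self.emit('tick');
    });
};
util.inherits(Ticker, EventEmitter);

var ticker = new Ticker();
ticker.on('tick', function() {
    console.log('TICK'); // now fires
});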
how do I emit an event when an object is created
You already know the answer: just listen for the event before triggering it. There is no other clean solution; asynchronously firing the emit is just a messier one. Also, I don't recommend writing ES5 on Node.js.
let EventEmitter = require('events');
class Ticker extends EventEmitter{
constructor(){
super();
this.on('tick', () => {
console.log('TICK');
});
this.emit('tick');
}
}
new Ticker();
// or better
class Ticker extends EventEmitter{
constructor(){
super();
}
}
var ticker = new Ticker();
ticker.on('tick', () => {
console.log('TICK');
});
ticker.emit('tick');
The fact is that you need someone to emit the event. It can be a function, setInterval, setTimeout, etc.
The usage of the event emitter is only to bind all those functions, so they are called when a given event is emitted by someone. Hence you will always need someone to emit events.
I'm working on an interactive tutorial tool for JavaScript. The core of the tool is the script of the tutorial. The script will trigger various functions that run animations, play speaker voices, load new pages, etc. Three sample calls (most tutorials will have 10s to 100s of calls, so a neat overview of the calls is highly desired):
wrap(); //wrap the page in an iframe
playsound('media/welcome') //playing a sound (duh)
highlight('[name=firstname]'); //animation that highlights an element.
playsound('media/welcome2');
loadpage(page2); //loading a new page
All calls have something in common: they have non-standard triggers. In this simple script, for example, the second call should be triggered once the iframe in the first call is loaded. The third call is triggered once the sound is complete (i.e. a delay). The fourth call should be triggered once the animation is complete. The fifth should be triggered on an event (for example a click).
A technical solution would be to call each function in the callback of the previous one, but this has the potential to get pretty messy. What I like about a solution where the functions are called like this is that someone with a little bit of brains, but no coding experience, could hammer up a script of their own. How would you solve this? I'm pretty new to JavaScript, so if you could be explicit I'd appreciate it.
I'd use a pre-built solution. There is bound to be one that fits your needs. Something simple like jTour, or if that doesn't cover it, something a little more complex like Scriptio. Some of the answers to this question may also be of interest to you.
Edit
If you don't want to use a preexisting solution, I'd do something like this:
var runTutorial = (function () {
// The command object holds all the different commands that can
// be used by someone for the tutorial. Each of these commands
// will recive a callback set as their `this`. This
// callback should be called by your commands when they are done
// running. The person making the tutorial won't need to know
// about the callback, the code will handle that.
var commands = {
wrap: function () {
//wrap the page in an iframe
this();
},
playsound: function (soundPath, soundLength) {
//playing a sound (duh)
setTimeout(this, soundLength);
},
highlight: function (selector) {
//animation that highlights an element.
//I'm using jQuery UI for the animation here,
// but most animation libraries should provide
// a callback for when the animation is done similarly
$(selector).effect('highlight', 'slow', this);
},
loadpage: function (pageUrl) {
//loading a new page
setTimeout(this, 500);
},
waitForClick: function () {
// when we go into the click handler `this` will no
// longer be availble to us since we will be in a
// different context, save `this` into `that` so
// we can call it later.
var that = this;
$(document).one('click', function () {
that();
});
}
},
// This function takes an array of commands
// and runs them in sequence. Each item in the
// array should be an array with the command name
// as the first item and any arguments it should be
// called with following as the rest of the items.
runTutorial = function (commandList) {
var nextCommand = function () {
if (commandList.length > 0) {
var args = commandList.shift();
// remove the command name
// from the argument list
var cmd = args.shift();
// call the command, setting nextCommand as `this`
commands[cmd].apply(nextCommand, args);
}
}
nextCommand();
};
return runTutorial;
}());
$('#tutorialbutton').click(function() {
runTutorial([
['playsound', 'media/welcome', 1000],
['highlight', '[name=firstname]'],
['playsound', 'media/welcome2', 1500],
['waitForClick'],
['loadpage', page2],
['playsound', 'media/page2', 100]
]);
});
The runTutorial function takes a simple array containing the commands in the order they should be run, along with their parameters. There's no need to bother the person writing the script with callbacks; runTutorial handles that for them. This has some big advantages over a system that requires the writer to manage callbacks: you don't need a unique name for each line in the script as you do with explicit callbacks, nor endless nesting of anonymous functions, and you don't need to rewire anything to change the order the commands play in; you just physically rearrange them in the array.
jsfiddle you can play with
Each of your commands will need to wait for its action to be done before it calls its callback (aka this). I simulate this in the fiddle using setTimeout. For instance, if you are using jQuery's .animate for highlight, it provides a complete handler that fires when the animation is done; just stick this (without the invocation parentheses ()) there. If you are using jQuery UI, it has a built-in 'highlight' effect, so you could implement it like this:
highlight: function (selector) {
//animation that highlights an element.
$(selector).effect('highlight', 'slow', this);
},
Most other libraries that provide animations should provide a similar callback option you can use.
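For instance, with plain jQuery (no jQuery UI), most animation methods take a complete callback as their last argument, so a rough substitute for highlight could look like this (using a simple fade instead of a colour flash, since colour animation needs jQuery UI or a plugin):
highlight: function (selector) {
    // fade the element out and back in, then move on to the next command
    $(selector).fadeOut('fast').fadeIn('slow', this);
},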
Controlling the callback for the sounds may be harder depending on how you are playing them. If the method you are using doesn't provide a callback or a way of polling it to see if it is done yet you might just have to add another parameter to playsound that takes the length of the sound in ms and then waits that long before proceeding:
playsound: function (soundPath, soundLength) {
//playing a sound (duh)
setTimeout(this, soundLength);
},
Callbacks are your best bet, I think. They don't have to be messy (though it's certainly possible to make them completely incomprehensible). You could create each function to accept a callback, then use a structure like this to call them in sequence in a readable way:
var loadingSequence = {
start : function() { wrap(loadingSequence.playsound); },
playsound : function() { playsound('media/welcome', loadingSequence.highlight); },
highlight : function() { highlight('[name=firstname]', loadingSequence.playsound2); },
playsound2 : function() { playsound('media/welcome2', loadingSequence.loadpage); },
loadpage : function() { loadpage(page2); }
};
loadingSequence.start();
I've got a sequence of Javascript function calls in a function I have defined to be executed when a web doc is ready. I expected them to be executed in sequence, as one ends the next begins, but the behaviour I see doesn't match up with that.
Additionally there is manipulation of the graphical components going on in between the calls (for example, I add in a checkpoint time to draw on a div on the page inbetween each of the mentioned calls) but those redraws aren't happening in sequence... they all happen at once.
I'm a bit of a n00b with the whole javascript-in-the-browser thing, is there an obvious mistake I'm making, or a good resource to go find out how to do this stuff?
Update - sample
// called onReady()
function init() {
doFirstThing();
updateDisplayForFirstThing();
doSecondThingWithAjaxCall();
updateDisplayForSecondThing();
...
reportAllLoaded();
}
IE won't update the display until the current script is finished running. If you want to redraw in the middle of a sequence of events, you'll have to break your script up using timeouts.
If you post some code we can help refactor it.
edit: here's a general pattern to follow.
function init() {
doFirstThing();
updateDisplayForFirstThing();
}
function updateDisplayForFirstThing() {
// existing code
...
// prepare next sequence
var nextFn = function() {
// does this method run async? if so you'll have to
// call updateDisplayForSecondThing as a callback method for the
// ajax call rather than calling it inline here.
doSecondThingWithAjaxCall();
updateDisplayForSecondThing();
}
setTimeout(nextFn, 0);
}
function updateDisplayForSecondThing() {
// existing code
...
// prepare next sequence
var nextFn = function() {
// continue the pattern
// or if you're done call the last method
reportAllLoaded();
}
setTimeout(nextFn, 0);
}
This can be fixed for many cases by using callbacks, especially with AJAX calls -- for example:
function doFirstThing(fn){
// doing stuff
if(typeof fn == 'function') fn();
}
function updateDisplayForFirstThing(){
// doing other stuff
}
function init(){
doFirstThing(updateDisplayForFirstThing);
}
Another option is to use return values:
function doFirstThing(fn){
// doing stuff
if(x) return true;
else return false;
}
function updateDisplayForFirstThing(){
// doing other stuff
return true;
}
function init(){
if(doFirstThing()){ updateDisplayForFirstThing(); }
}
Setting timeouts to step through your code is not really a good way to fix this problem, because you'd have to set your timeouts to the maximum length of time each piece of code could possibly take to execute.
However, you may still sometimes need to use a setTimeout to ensure the DOM has properly updated after certain actions.
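For example, something like this (reusing the function names from the question) gives the browser a chance to repaint between the two steps:
updateDisplayForFirstThing();
// yield to the browser so it can repaint before the next step runs
setTimeout(function () {
    doSecondThingWithAjaxCall();
}, 0);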
If you end up deciding that you would like some JavaScript threading, check out the still-being-drafted Web Workers API. Browser support is hit and miss, though the API is implemented in most modern web browsers.
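A minimal sketch of what that looks like, assuming a separate file called worker.js (the file name and the doubling logic are just placeholders):
// worker.js - runs in a background thread
self.onmessage = function (e) {
    // do the heavy work off the main thread
    self.postMessage(e.data * 2);
};

// main script
var worker = new Worker('worker.js');
worker.onmessage = function (e) {
    console.log(e.data); // logs 42
};
worker.postMessage(21);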
Question: exactly how did you go about determining when the "doc is ready"? The DOMContentLoaded event isn't supported in IE, I'm fairly certain... if you need to wait for your document to load in its entirety, you could use something like this:
var onReady = function(callback) {
if (document.addEventListener) {
document.addEventListener("DOMContentLoaded", callback, false);
return true;
} else if (document.attachEvent) {
var DOMContentLoaded = function() {
if (document.readyState === "complete") {
document.detachEvent("onreadystatechange", DOMContentLoaded);
callback();
}
};
document.attachEvent("onreadystatechange", DOMContentLoaded);
return true;
}
};
Then of course you'll need some way, such as a setTimeout polling a flag that indicates the page has loaded, to hold off executing the rest of your code until the document is ready... that, or any number of other methods...
Or you could just include the script at the bottom of your body...
I'm just rambling though until you have some code to show us?