I'm just mucking around a bit with HTML5 canvas and painting. I happened to find this article by Opera that describes well how to set up a simple application.
I would like to extend it so that the user may set up a vanishing point (1 point perspective) that may be used to constrain painting. To allow this I need to figure out a way to define some modifier keys to limit the result (i.e. constraints map as (key=>axis) a=>x, s=>y, d=>z).
Is there some way to check out which key the user has pressed while handling the "mousedown" event?
AFAIK this will only work when the document has focus.
You should add a listener on the body for the keydown event; when it fires, store the key in a variable, and empty that variable when the user releases the key (keyup). An example would look like this:
document.body.onkeydown = function (evt) {
  evt = evt || window.event;
  console.log(evt);
};
document.body.onkeyup = function (evt) {
  evt = evt || window.event;
  console.log(evt);
};
Then you only need to check evt.keyCode and act on it.
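For example, a minimal sketch of that idea applied to the a/s/d => axis constraints from the question (the canvas id is an assumption, and evt.key is used here instead of the older evt.keyCode):
var activeAxis = null;                      // which constraint key is currently held
var keyToAxis = { a: 'x', s: 'y', d: 'z' }; // mapping taken from the question

document.body.addEventListener('keydown', function (evt) {
  if (keyToAxis[evt.key]) {
    activeAxis = keyToAxis[evt.key];        // remember the held constraint
  }
});

document.body.addEventListener('keyup', function (evt) {
  if (keyToAxis[evt.key] === activeAxis) {
    activeAxis = null;                      // empty the variable on release
  }
});

var canvas = document.getElementById('paintCanvas'); // assumed canvas id
canvas.addEventListener('mousedown', function (evt) {
  if (activeAxis) {
    // constrain the stroke to the chosen axis / vanishing line here
    console.log('constrain painting to', activeAxis);
  }
});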
You could also try a third-party library like shortcut.js.
I'm working on a ReactJS app in which I'm using EaselJS to handle multiple canvases. On the same page I have to add and remove different canvases, depending on various conditions, to render different views. Even after removing a canvas using the following code to dispose of a canvas component:
createjs.Touch.disable(this.stage);
this.stage.removeAllChildren();
this.stage.removeAllEventListeners();
this.stage.enableDOMEvents(false);
some events are still being triggered. After using the application for a while it starts to use a lot of processing power and memory. After looking at the Performance tab in Chrome's developer tools, I found that a timer event is being fired for every canvas that was ever added. After inspecting the code, I found that
this.stage.enableMouseOver();
sets a setInterval timer that is not removed even after calling all of the code above, and I can't find any way to remove it.
Can anyone please help me get rid of it? Thanks in advance.
The enableMouseOver method is documented to both add and remove the functionality from a Stage. By passing 0 as the frequency, the interval should be cleared.
stage.enableMouseOver(0);
From the documentation:
Enables or disables (by passing a frequency of 0)
and
frequency: Optional param specifying the maximum number of times per second to broadcast mouse over/out events. Set to 0 to disable mouse over events completely.
I did a quick pass on the code, and it definitely removes the interval.
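For instance, a hedged sketch of wiring that into a React class component's lifecycle (the component name, ref handling, and frequency value are illustrative assumptions):
class CanvasView extends React.Component {
  componentDidMount() {
    this.stage = new createjs.Stage(this.canvasEl);
    this.stage.enableMouseOver(20); // starts the setInterval timer
  }

  componentWillUnmount() {
    // Dispose of the stage; enableMouseOver(0) clears the mouseover interval.
    createjs.Touch.disable(this.stage);
    this.stage.enableMouseOver(0);
    this.stage.removeAllChildren();
    this.stage.removeAllEventListeners();
    this.stage.enableDOMEvents(false);
    this.stage = null;
  }

  render() {
    return React.createElement('canvas', { ref: el => { this.canvasEl = el; } });
  }
}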
I have two questions about Crafty (I've also asked in their Google Groups community, but it seems very few people look at that).
I've followed this tutorial http://buildnewgames.com/introduction-to-crafty/ and also took a look at the "isometric" demo of a bunch of blocks on Crafty's website (http://craftyjs.com/demos/isometric/), and I've been trying some things by combining what I've learned from both.
(Q1) When I use the fourway component (used heavily in the tutorial), if I hold the left arrow key, for example, Ctrl-Tab out of the current tab while holding left, and then go back (not necessarily holding left anymore), my character seems to get stuck moving to the left. It also happens for the other three directions. Is this a known issue? Is there any way to fix it without changing Crafty?
It happens here with Firefox 29 and Chrome 34. My code is pretty much the final version presented at the end of the tutorial (it's not identical, but even when it was the same I already had this issue).
By the way, when this happens, if I Ctrl-Tab out and back again while holding that left key, things go back to normal (the movement stops).
(Q2) The isometric-ish features interpret Z as height, while the gravity component uses Y for height. Isn't that a problem? Can I, for example, tell gravity to use something other than Y for height?
Regarding (Q1), the movement is managed by keydown and keyup events. If you change tabs after a movement has started, the fourway component never receives the keyup event that would stop it again. You could use a workaround like the following:
Crafty.settings.modify("autoPause", true);
Enabling autoPause (somewhere in your init function) will pause your game when the browser tab Crafty is running in becomes inactive. You can then react to this event by triggering keyup events or by preventing the player component from moving, like this:
player.bind('Pause', function() {
  this.disableControl();
});
player.bind('Unpause', function() {
  this.enableControl();
});
You might also want to stop the animation there if you handle it in your player component.
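As an alternative to disabling control, here is a sketch of the "triggering keyup events" idea mentioned above; whether the fourway component reacts to a synthetic KeyUp with a key property is an assumption you should verify against your Crafty version:
player.bind('Pause', function() {
  // Pretend all arrow keys were released so fourway clears any stuck direction.
  var arrows = [Crafty.keys.LEFT_ARROW, Crafty.keys.RIGHT_ARROW,
                Crafty.keys.UP_ARROW, Crafty.keys.DOWN_ARROW];
  for (var i = 0; i < arrows.length; i++) {
    this.trigger('KeyUp', { key: arrows[i] });
  }
});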
I'm working on a javascript framework for creating simple animations on an html canvas with nested sprites using a basic composite pattern.
I've been modeling my work on Clutter and Flash (very similar structure). A "Stage" holds all of the items on screen, which are "DisplayObjects". These can be aggregated in a "DisplayObjectContainer", which inherits from "DisplayObject". The "Stage" itself is also a "DisplayObjectContainer". All of these inherit from an "EventDispatcher".
I've spent the better part of the last few days reading about the event flow of these systems and searching for examples in various open source projects.
From what I understand, when an event is dispatched, it should follow a certain propagation path: it flows from the stage, into the display object hierarchy (the "capture" phase) until it reaches the "target" of that event, and then "bubbles" back up the display hierarchy. If this isn't clear enough, the images located here should help explain:
http://help.adobe.com/en_US/as3/dev/WS5b3ccc516d4fbf351e63e3d118a9b90204-7e4f.html
http://docs.clutter-project.org/docs/clutter/1.4/event-flow.png
There is an aspect of this that I'm failing to understand, and I can't tell if it's just me or if this is as unclear as I think it is:
Suppose I'm dealing with clicks. I click on the display and use the browser's native event handling to retrieve the x/y coordinates of the click, and then send that down the display hierarchy to determine which object I've clicked.
Until now, this WAS the "capture" phase in my code. But this is completely at odds with the documentation which says the target should already be attached to the event by the time it enters the event flow.
Am I really supposed to traverse my graph of display items twice?
Any advice or expertise on the issue would be highly appreciated.
Interesting question! Yes, I believe you would need to traverse your DisplayList first to calculate the event target before beginning the capture phase of your event-flow. Never having designed an event system, I'm not completely sure about this, but perhaps when you calculate the target object you could cache the hierarchical route and use that as the basis of your event-flow rather than traversing the DisplayList again.
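For instance, a minimal sketch of that idea: the hit test finds the target once, the parent chain is cached as the propagation path, and all three phases reuse it (the parent and handleEvent names are assumptions, not part of any existing framework):
function dispatchEvent(target, event) {
  // Build the path once, from the target up to the stage...
  var path = [];
  for (var node = target; node !== null; node = node.parent) {
    path.push(node);
  }
  path.reverse(); // ...so it now runs stage -> ... -> target

  // Capture phase: stage down to (but not including) the target.
  for (var i = 0; i < path.length - 1; i++) {
    path[i].handleEvent(event, 'capture');
  }

  // Target phase.
  target.handleEvent(event, 'target');

  // Bubble phase: the target's parent back up to the stage.
  for (var j = path.length - 2; j >= 0; j--) {
    path[j].handleEvent(event, 'bubble');
  }
}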
The bit that you're unclear on, I think, is more obvious if you consider it in terms of implementation rather than in the abstract of designing an event system (and the terminology of existing event systems). Imagine a widget consisting of a parent object and a number of child items which need to react to mouse clicks. You might decide you want to listen for events on the parent object only, but react according to the target object from which the event originated. In ActionScript, if you're using the capture phase of the event flow, your handler fires before the target of the event is reached, but in this case the target is an essential property of the event object.
As suggested in the comments, it might be worth looking at the source code for easeljs since it claims to provide an API "that is familiar to Flash developers". However, note that easeljs does not currently support a full-featured event flow for performance reasons (see here).
My two pennies: event flow is tricky enough to understand (let alone design), and implementing a full-featured event system may be at odds with your goal of creating a lightweight library. I would suggest you keep it simple at this stage and only add features such as event bubbling if you find you need them.
Looking at the (quite outdated) documentation on the Apple website and across the internet for the touch and gesture JavaScript events, I can't help but notice that the gesture event passed to the handler only seems to contain the values 'scale' and 'rotation'. Have I missed something, or are those the only two values provided?
It's hard to see the values of an event object on the iPhone, as it is annoyingly only shown by its type and name, e.g. when logging the object in the console it just shows up as '[object TouchEvent]'.
What I am trying to achieve is to run a function once a two-finger swipe occurs and, depending on whether the swipe was left or right, change the page accordingly.
I have tried a more complex touch event and the (what I thought would be easier) gesture event, but the gesture event only provides the scale and rotation (apparently), and the touch event seems quite complex the way I'm doing it.
Does anyone know if the gesture event really only exposes scale and rotation, and if so, how I can get the same effect with the touch event instead?
You should use touch events, which give you the trajectory of the movement, and then decide whether that trajectory falls into the "swipe" category using your own definition of a swipe.
Usually a "swipe" gesture is context specific: e.g. it should start in a particular DOM element and probably end there. That means the same gesture may not be a "swipe" for a particular DOM element but may be one for its container, so you cannot generate a bubbling swipe event in general.
Zoom and rotation gestures can be detected without knowing the context, so the system generates them for you.
There are ready-to-use frameworks and libraries that include swipe gesture detectors, at least for some popular containers such as items in a vertical list.
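For illustration, a minimal sketch of a two-finger horizontal swipe detector built on plain touch events (the element id and the 50px threshold are assumptions):
var el = document.getElementById('page'); // assumed element
var startX = null;

el.addEventListener('touchstart', function (e) {
  if (e.touches.length === 2) {
    // Track the midpoint of the two fingers.
    startX = (e.touches[0].clientX + e.touches[1].clientX) / 2;
  }
}, false);

el.addEventListener('touchmove', function (e) {
  if (startX !== null) {
    e.preventDefault(); // keep the page from scrolling while we track
  }
}, false);

el.addEventListener('touchend', function (e) {
  if (startX === null) return;
  var t = e.changedTouches;
  var endX = (t.length >= 2) ? (t[0].clientX + t[1].clientX) / 2 : t[0].clientX;
  var dx = endX - startX;
  startX = null;
  if (dx > 50) {
    console.log('two-finger swipe right'); // go to previous page, for example
  } else if (dx < -50) {
    console.log('two-finger swipe left');  // go to next page
  }
}, false);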
Mozilla Firefox 3.x seems to have a bug when listening to the "drag" event. The event object doesn't report the position of the object being dragged: clientX, clientY, and the other screen offsets are all set to zero.
This is quite problematic, as I wanted to create a proxy element based on the element being dragged, using, of course, clientX and clientY to adjust its position.
I know that there's cool stuff around such as setDragImage in HTML5 but I want to provide a generic abstraction for native DD between browsers.
Faulty code:
document.addEventListener('drag', function (e) {
  console.log(e.clientX); // always zero
}, false);
Note: this problem doesn't happen with other events (dragstart, dragover), and mousemove events cannot be captured while something is being dragged.
I found a solution: I've placed a listener on the "dragover" event at the document level, and now I get the right X and Y properties, which I can expose through a globally shared object.
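For completeness, a sketch of that workaround (the dragPosition name and moveProxy helper are just illustrative):
var dragPosition = { x: 0, y: 0 };

// dragover fires with valid coordinates, so record them in a shared object.
document.addEventListener('dragover', function (e) {
  dragPosition.x = e.clientX;
  dragPosition.y = e.clientY;
}, false);

document.addEventListener('drag', function (e) {
  // e.clientX/Y are zero in Firefox 3.x, so read the shared object instead.
  moveProxy(dragPosition.x, dragPosition.y);
}, false);

function moveProxy(x, y) {
  // placeholder for positioning the proxy element
  console.log('proxy at', x, y);
}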
The drag event in HTML5 is not fully functional in today's browsers. To simulate a drag-and-drop situation you should:
Add an onmousedown handler that sets a variable to true.
Add an onmouseup handler that sets that variable to false.
Add an onmousemove handler that checks whether the variable is true and, if it is, moves your div according to the coordinates (see the sketch below).
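For illustration, a rough sketch of that approach (the "draggable" element id is an assumption):
var dragging = false;
var el = document.getElementById('draggable'); // assumed element

el.addEventListener('mousedown', function (e) {
  dragging = true;
  e.preventDefault(); // avoid starting a native drag or text selection
}, false);

document.addEventListener('mouseup', function () {
  dragging = false;
}, false);

document.addEventListener('mousemove', function (e) {
  if (!dragging) return;
  el.style.position = 'absolute';
  el.style.left = e.clientX + 'px';
  el.style.top = e.clientY + 'px';
}, false);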
This has always worked for me. If you run into any problems, get in touch again and I can provide more examples.
Good luck!
I know that there's cool stuff around such as setDragImage in HTML5 but I want to provide a generic abstraction for native DD between browsers.
But why do something like this? Aren't there libraries like jQuery and Prototype available for cross-browser drag and drop?
Or, if you want to implement a DD library of your own, you can build on their methods or extend them, as both libraries follow an object-oriented paradigm.
This will save a lot of time.