I want to track objects during drag and drop so that I have information about where the object is dragged from and where it is dropped. How can I make that possible?
The actual reason is that I'm developing a game where I need to track the source and destination points: I should count a move as successful only if a shape is taken from a particular place and dropped onto a particular place.
The raycaster gives an intersection array, but those are intersections from the cursor, not from any other object.
This page from script tutorials can be taken for example : https://www.script-tutorials.com/demos/467/index.html
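A minimal, framework-agnostic sketch of the idea: record which zone the pointer is over at drag start, hit-test again at drop, and compare the pair. In a three.js scene you would replace the `zoneAt` hit test with a raycast at dragstart and again at drop; the zone names and rectangle shapes here are my own illustrative assumptions.

```javascript
// Sketch: track drag source and destination by hit-testing the pointer
// against named drop zones. All names here are illustrative.

function makeDragTracker(zones) {
  // zones: [{ name, xmin, ymin, xmax, ymax }]
  const zoneAt = (x, y) =>
    zones.find(z => x >= z.xmin && x <= z.xmax && y >= z.ymin && y <= z.ymax);

  let source = null;
  return {
    // Call from your dragstart/mousedown handler.
    start(x, y) {
      const z = zoneAt(x, y);
      source = z ? z.name : null;
    },
    // Call from your drop/mouseup handler; returns both endpoints so the
    // game can verify the move went from and to the required places.
    drop(x, y) {
      const z = zoneAt(x, y);
      const result = { from: source, to: z ? z.name : null };
      source = null;
      return result;
    },
  };
}
```

A move would count as successful when `result.from` and `result.to` match the expected source/destination pair.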
So I have a WebGL model viewer (one model made of lots of different objects), in which I have implemented a lasso/marquee selection box. That is, the user can click to drag a rectangle on the screen, and everything inside the rectangle gets selected. It selects not just visible objects, but everything in the model that is within the bounds of the rectangle.
I have to do this for 2 cases: partial and full selection.
Full Selection: only objects that are completely inside the rectangle will get selected.
Partial Selection: any object that has any overlap in the lasso rectangle, will get selected.
Now, I have full selection working just fine. The problem is with partial selection. Everything I've done is too slow. I don't have a spatial index/partition (octree, BVH, etc.), so my next step is to create one to help cull out things that won't get selected more quickly.
In general, though, it seems at some point, I'm stuck with iterating through all the triangles of an object to see if any part of it overlaps with the lasso rectangle on screen. In some cases, this will be very slow if the lasso selection is dragged to be large enough, such that no objects get culled out, and all must be checked for intersection with the rectangle.
Right now, I'm doing all rectangle/triangle intersection tests in JS (various ways of checking for intersection of an object with the rectangle), and no test is happening in a shader.
So I'm not really looking for help speeding up what I currently have; I want to know whether there are known ways of doing this that are relatively fast even for large numbers of highly detailed objects (meshes). I seem to have hit a wall, creatively, in thinking of a shader-based approach.
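For reference, the per-triangle test the question describes boils down to a 2D triangle-vs-rectangle overlap check in screen space. A sketch, assuming triangles are already projected to screen coordinates (all names here are illustrative, not from the question's code):

```javascript
// 2D triangle-vs-rectangle overlap, for screen-space partial selection.

function pointInRect(p, r) {
  return p.x >= r.xmin && p.x <= r.xmax && p.y >= r.ymin && p.y <= r.ymax;
}

function pointInTriangle(p, a, b, c) {
  // Sign of the cross product tells which side of each edge the point is on.
  const sign = (p1, p2, p3) =>
    (p1.x - p3.x) * (p2.y - p3.y) - (p2.x - p3.x) * (p1.y - p3.y);
  const d1 = sign(p, a, b), d2 = sign(p, b, c), d3 = sign(p, c, a);
  const hasNeg = d1 < 0 || d2 < 0 || d3 < 0;
  const hasPos = d1 > 0 || d2 > 0 || d3 > 0;
  return !(hasNeg && hasPos);
}

function segmentsIntersect(p, q, r, s) {
  const cross = (o, a, b) => (a.x - o.x) * (b.y - o.y) - (a.y - o.y) * (b.x - o.x);
  return ((cross(p, q, r) > 0) !== (cross(p, q, s) > 0)) &&
         ((cross(r, s, p) > 0) !== (cross(r, s, q) > 0));
}

function triangleOverlapsRect(tri, rect) {
  // 1. Any triangle vertex inside the rectangle?
  if (tri.some(v => pointInRect(v, rect))) return true;
  // 2. Any rectangle corner inside the triangle?
  const corners = [
    { x: rect.xmin, y: rect.ymin }, { x: rect.xmax, y: rect.ymin },
    { x: rect.xmax, y: rect.ymax }, { x: rect.xmin, y: rect.ymax },
  ];
  if (corners.some(c => pointInTriangle(c, tri[0], tri[1], tri[2]))) return true;
  // 3. Any triangle edge strictly crossing any rectangle edge?
  const rectEdges = [[0, 1], [1, 2], [2, 3], [3, 0]];
  const triEdges = [[0, 1], [1, 2], [2, 0]];
  return triEdges.some(([i, j]) =>
    rectEdges.some(([k, l]) =>
      segmentsIntersect(tri[i], tri[j], corners[k], corners[l])));
}
```

Whatever acceleration structure sits on top, this is the leaf-level test it has to avoid calling as often as possible.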
Gridster.js allows you to drag and drop tiles of variable size, rearranging the other tiles as you drag one.
Does anyone know the algorithm for rearranging the tiles?
I want to build this type of grid layout in ActionScript, and I'm not very familiar with JavaScript. I've already tried to read the code but don't understand most of it, so if someone can give me the algorithm I'll try to implement the logic in ActionScript.
I tried a little more and made a grid layout matching my requirement.
Here is a demo link.
I'll try to write an abstract algorithm for updating this type of tile layout.
While the drag is in progress:
1. Calculate the new drop location for the dragged item and move the dragged item there.
2. Check whether there is any collision after moving.
3. If there is a collision, fix it (i.e. move the colliding tile down) and go back to step 2.
4. Update the other tiles' positions (i.e. check whether any tile can move upward into free space).
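The steps above can be sketched as follows, assuming tiles live on an integer grid as `{ id, x, y, w, h }` (column, row, width, height in cells). The names are mine, not taken from Gridster's source:

```javascript
// Sketch of the drag-rearrange algorithm: move, push collisions down,
// then let other tiles float back up into free space.

const overlaps = (a, b) =>
  a.x < b.x + b.w && b.x < a.x + a.w && a.y < b.y + b.h && b.y < a.y + a.h;

function resolveCollisions(tiles, moved) {
  // Steps 2-3: push any colliding tile down until nothing overlaps.
  let changed = true;
  while (changed) {
    changed = false;
    for (const t of tiles) {
      for (const other of tiles) {
        if (t !== other && overlaps(t, other)) {
          // Never push the tile being dragged; push the other one down.
          const victim = other === moved ? t : other;
          victim.y += 1;
          changed = true;
        }
      }
    }
  }
}

function compactUp(tiles, moved) {
  // Step 4: every non-dragged tile floats up while the cell above is free.
  const canPlace = (t, y) =>
    tiles.every(o => o === t || !overlaps({ ...t, y }, o));
  let changed = true;
  while (changed) {
    changed = false;
    for (const t of tiles) {
      if (t === moved) continue;
      while (t.y > 0 && canPlace(t, t.y - 1)) {
        t.y -= 1;
        changed = true;
      }
    }
  }
}

function dragTo(tiles, moved, x, y) {
  // Step 1: move the dragged tile, then restore the layout invariants.
  moved.x = x;
  moved.y = y;
  resolveCollisions(tiles, moved);
  compactUp(tiles, moved);
}
```

Pushing only downward guarantees the collision loop terminates, since each push strictly increases a tile's row and the grid is open-ended at the bottom.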
Well... so it's the same problem I face with Kinetic.Group. I tried several things to make this work, but always with the same result: it doesn't work, and even worse, the basic drag-and-drop functionality disappears!
I already know how to do this inside one container; the difficulty comes when I try to adapt it to drag and drop from the DOM to the container. After the drop, I need the images and shapes to move all together. That's why I created a group for each item and made it draggable.
This is the fiddle I'm changing so that the elements dropped onto the canvas are draggable as a whole group (the non-working fiddle): http://jsfiddle.net/gkefk/15/. What is wrong with this code?
PS1: this one is the main functionality of drag and drop http://jsfiddle.net/gkefk/14/ which I'm editing.
PS2: I'm a beginner, so if you find "stupidities" in that code, please report.
A simple guide to getting what you want out of this:
Get rid of jQuery and start over.
1. Create a new Stage
2. Create two layers, one taking up the left half of the Stage, the other the right half.
3. Put all your objects in the left layer; make them clone-able on mousedown and fire the drag event so you can place them in the other layer on mouseup.
4. If your item is a rectangle - I'm assuming this is a group which will have children - create a new group with a rectangle inside it, and place it in the right layer on drop.
5. If your item is a house, check for mouse intersection with a rectangle; if the mouse is over a rectangle, get the rectangle's parent on drop (which will be a group), and place the house in that group; otherwise place it freely in the right layer.
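The drop logic of steps 4-5 can be sketched framework-agnostically. The plain objects and `pointInRect` below stand in for KineticJS shapes and its hit-testing; all names are illustrative:

```javascript
// Sketch of the drop handler: rectangles become new groups, houses are
// reparented into the group of the rectangle under the cursor.

const pointInRect = (p, r) =>
  p.x >= r.x && p.x <= r.x + r.width && p.y >= r.y && p.y <= r.y + r.height;

function drop(item, pointer, rightLayer) {
  if (item.type === 'rectangle') {
    // Step 4: wrap the rectangle in a fresh group so later children
    // (houses) move together with it.
    const group = { type: 'group', x: pointer.x, y: pointer.y, children: [item] };
    rightLayer.children.push(group);
    return group;
  }
  // Step 5: a house goes into the group of the rectangle under the cursor,
  // if any; otherwise it lands directly on the right layer.
  for (const group of rightLayer.children) {
    const rect = group.children && group.children.find(c => c.type === 'rectangle');
    if (rect && pointInRect(pointer,
        { x: group.x, y: group.y, width: rect.width, height: rect.height })) {
      group.children.push(item);
      return group;
    }
  }
  rightLayer.children.push(item);
  return rightLayer;
}
```

Because the house is appended to the group's `children`, dragging the group later moves the rectangle and all its houses as one unit, which is the behavior the question asks for.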
I have a large set of rectangles that are drawn on html5 canvas.
I would like to be able to interact with this image using mouse tracking (I cannot use SVG because it does not scale to 10-100k rectangles). Is there any data structure/algorithm that, given the mouse x,y coordinates, could tell you which box the mouse is over (using the computed locations of the rectangles)? I was thinking of something like a k-d tree but was not sure.
If your data is always of the form shown I think you should be able to do better than a spatial tree data structure.
Since the data is structured in y, you should be able to calculate which 'strip' of rectangles the point is in based on offsets in O(1) time.
If you store the individual rectangles within each 'strip' in sorted order (using xmax say) you should then be able to locate the specific rectangle within the strip using a binary search (in O(log(n))).
Hope this helps.
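A sketch of this strip-plus-binary-search scheme, assuming fixed-height strips and non-overlapping rectangles within each strip (both assumptions are mine, inferred from the "structured in y" remark):

```javascript
// Strip lookup in O(1), then binary search on xmax within the strip.

function makeIndex(strips, stripHeight) {
  // strips: array of arrays of { xmin, xmax, id }, each sorted by xmax.
  return function hit(x, y) {
    const row = Math.floor(y / stripHeight);          // O(1) strip lookup
    const strip = strips[row];
    if (!strip) return null;
    let lo = 0, hi = strip.length - 1, found = null;
    while (lo <= hi) {                                // O(log n) in the strip
      const mid = (lo + hi) >> 1;
      if (strip[mid].xmax >= x) { found = strip[mid]; hi = mid - 1; }
      else lo = mid + 1;
    }
    // The first rect with xmax >= x is the only candidate; confirm xmin.
    return found && found.xmin <= x ? found.id : null;
  };
}
```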
The naive solution would be to iterate over all rectangles and check if you are in it. Checking for a single rectangle is easy (if you want I will write it down explicitly).
If you have many rectangles and care about performance, you can easily speed things up by putting the rectangles in a data structure which is faster to look in. Looking at the image you sent, one obvious property of your data is that there is a limited amount of vertical positions ("rows"). This means that if you check which row you are on, you then only need to check rectangles within that row. Finally, to select which row you are on or within a row select which rectangle, keep a sorted data structure (e.g. some search tree).
So your data structure could be something like a search tree for row, where each node holds a search tree of rectangles along the row.
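Taking up the offer above, the single-rectangle check is just four comparisons, and the row bucketing looks like this. Field names (`x`, `y`, `w`, `h`) and the row layout are assumptions, not from the question:

```javascript
// Point-in-rectangle check plus row bucketing: only one row's rectangles
// get tested. With sorted row keys, the row scan below could be a search
// tree, as the answer suggests.

const inRect = (px, py, r) =>
  px >= r.x && px <= r.x + r.w && py >= r.y && py <= r.y + r.h;

function hitTest(rowsByY, px, py) {
  // rowsByY: Map from a row's y-origin to its rectangles (uniform height).
  for (const [y0, rects] of rowsByY) {
    if (py < y0 || py >= y0 + rects[0].h) continue;   // not this row
    const found = rects.find(r => inRect(px, py, r));
    if (found) return found;
  }
  return null;
}
```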
An R-tree is suitable for providing spatial access of this kind.
But some questions:
Is your rectangle set static or dynamic?
How many point queries do you expect against the rectangle set?
Edit:
Since the rectangle set is static:
There is a method used in traditional graphics with bitmaps (I don't know whether it is applicable to the HTML5 canvas or not):
Make an additional bitmap the same size as the main picture. Draw every rectangle at the same coordinates, but fill it with a unique color (for example, color = rectangle index). Then read the color under the mouse point => get the rectangle index in O(1).
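A sketch of this picking-buffer idea. With a real canvas you would paint into an offscreen 2D context and read the pixel back with `getImageData`; here a plain typed array stands in for the bitmap, and index+1 stands in for the unique fill color:

```javascript
// Picking buffer: each rectangle "drawn" with its index as the color,
// giving O(1) hit lookup under the mouse.

function makePickBuffer(width, height, rects) {
  const buf = new Uint32Array(width * height); // 0 = background
  rects.forEach((r, i) => {
    // In a canvas, this fill would be an RGB encoding of i + 1,
    // e.g. red = (i + 1) & 0xff, green = ((i + 1) >> 8) & 0xff, ...
    for (let y = r.y; y < r.y + r.h; y++) {
      for (let x = r.x; x < r.x + r.w; x++) {
        buf[y * width + x] = i + 1;
      }
    }
  });
  // O(1) lookup: read the "color" under the mouse, decode to an index.
  return (mx, my) => buf[my * width + mx] - 1; // -1 = no rectangle
}
```

The trade-off is memory (one extra pixel buffer) and rebuild cost when rectangles change, which is why the answer asks whether the set is static.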
I'm working on an app that displays a large image just about the same way as Google Maps. As the user drags the map, more images are loaded so that when a new part of the map is visible, the corresponding images are already in place.
By the way, this is a Javascript project.
I'm thinking of representing each tile as a square div with the image loaded as a background image.
My question: how exactly can I calculate which divs are showing, and when the tiles are moved, how do I tell when a new row of divs has become visible?
Thanks!
About calculating what divs are showing: learn the algorithm for intersecting two rectangles (the stackoverflow question Algorithm to detect intersection of two rectangles? is a good starting point). With that, the divs that are showing are the ones whose intersection with the "view window" is non-empty.
About telling when a new row of divs has become visible: you will probably need an updateInterface() method anyway. Use this method to keep track of the divs showing, and when divs that weren't showing before enter the view window, fire an event handler of sorts.
About implementation: you should probably have the view window be itself a div with overflow: hidden and position: relative. Having a relative position attribute in CSS means that a child with absolute position top 0, left 0 will be at the top-left edge of the container (the view area, in your case).
About efficiency: depending on how fast your "determine which divs are showing" algorithm ends up being, you can try handling the intersection detection only when the user stops dragging, not on the mouse move. You should also preload the areas immediately around your current view window, so that if the user doesn't drag too far away, they will already be loaded.
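The "which divs are showing" calculation is simpler than general rectangle intersection when tiles sit on a fixed grid: divide the view window's edges by the tile size. A sketch, where the tile size and the one-tile preload margin are illustrative choices:

```javascript
// Compute which tile rows/columns intersect the view window, plus a
// preload margin around it.

function visibleTiles(viewX, viewY, viewW, viewH, tileSize, margin = 1) {
  const firstCol = Math.max(0, Math.floor(viewX / tileSize) - margin);
  const firstRow = Math.max(0, Math.floor(viewY / tileSize) - margin);
  const lastCol = Math.floor((viewX + viewW - 1) / tileSize) + margin;
  const lastRow = Math.floor((viewY + viewH - 1) / tileSize) + margin;
  const tiles = [];
  for (let row = firstRow; row <= lastRow; row++) {
    for (let col = firstCol; col <= lastCol; col++) {
      tiles.push({ row, col });
    }
  }
  return tiles;
}
```

Diffing successive results inside updateInterface() tells you exactly which tiles just entered the view and need their divs created.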
Some further reference:
Tile5: Tiling Interfaces
gTile: Javascript tile based game engine
Experiments in rendering a Tiled Map in javascript/html…
There's no reason to implement this yourself, really, unless it's just a fun project. There are several open source libraries that handle online mapping.
To answer your question, you need to have an orthophoto-type image (an image aligned with the coordinate space) and then a mapping from pixel coordinates (i.e. the screen) to world coordinates. If they're not map images, just an arbitrary large image, then, again, you need to create a mapping between the pixel coordinates of the source image at various zoom levels and the viewport's coordinates.
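The pixel-to-world mapping described above can be sketched as a pair of inverse functions. The convention here (scale doubling per zoom level, world origin at the view's top-left) is an assumption, not taken from any particular SDK:

```javascript
// Pixel <-> world coordinate mapping for a pannable, zoomable viewer.

function makeViewport(originX, originY, zoom) {
  const scale = Math.pow(2, zoom); // pixels per world unit at this zoom
  return {
    worldToPixel: (wx, wy) => ({
      x: (wx - originX) * scale,
      y: (wy - originY) * scale,
    }),
    pixelToWorld: (px, py) => ({
      x: px / scale + originX,
      y: py / scale + originY,
    }),
  };
}
```

Panning updates the origin, zooming updates the scale, and every tile request is just `pixelToWorld` applied to the viewport corners.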
If you read Google Map's SDK documentation you will see explanations of these terms. It's also a good idea to explore one of the aforementioned existing libraries, read its documentation and see how it's done.
But, again, if this is real work, don't implement it yourself. There's no reason to.