I'm working on a web app and I would like to get some first-hand experience of how our users actually use our software. This is my idea:

* Use JavaScript to save the HTML DOM and cursor position, possibly saving only the changes to the DOM to reduce the amount of data.
* Save it to the server along with the user's browser.
* Write a JavaScript player that updates the DOM according to the recording, with an image that replicates the mouse movements, in the corresponding browser.
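As a rough sketch of what I have in mind for the recording side, a MutationObserver could capture the DOM changes and a mousemove listener the cursor (the /api/recordings endpoint is made up):

// Record DOM mutations and cursor positions, then ship them to the server.
const events = [];

const observer = new MutationObserver((mutations) => {
  for (const m of mutations) {
    events.push({
      type: 'mutation',
      t: performance.now(),
      target: m.target.nodeName,
      kind: m.type // 'childList', 'attributes' or 'characterData'
    });
  }
});
observer.observe(document.body, {
  childList: true,
  attributes: true,
  characterData: true,
  subtree: true
});

document.addEventListener('mousemove', (e) => {
  events.push({ type: 'cursor', t: performance.now(), x: e.pageX, y: e.pageY });
});

// Flush the buffer periodically together with the browser's user agent.
setInterval(() => {
  if (events.length === 0) return;
  navigator.sendBeacon('/api/recordings',
    JSON.stringify({ userAgent: navigator.userAgent, events: events.splice(0) }));
}, 5000);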
Has this ever been done before?
Would this work in most cases?
As circle73 said, you can use HTML5 to do this via canvas; however, I don't think that would track the mouse position. You could write a JavaScript function to record the mouse coordinates every x seconds; you'd just have to synchronize it with the screen captures so you can match the mouse movements to the captured frames.
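A rough sketch of that sampling idea, assuming a 100 ms interval (the interval and the buffer shape are arbitrary choices you'd tune to match the capture rate):

// Sample the last known mouse position on a fixed interval so the samples
// line up with screen captures taken at the same rate.
let lastX = 0, lastY = 0;
document.addEventListener('mousemove', (e) => {
  lastX = e.pageX;
  lastY = e.pageY;
});

const samples = [];
setInterval(() => {
  samples.push({ t: Date.now(), x: lastX, y: lastY });
}, 100); // keep this in step with the capture interval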
Your other option would be to do this via an ActiveX control, as answered here: Take a screenshot of a webpage with JavaScript?
I would approach this with the following high-level strategy:
Use jQuery's mousemove event to record the user's mouse positions on the page. Store these positions (x, y coordinates) locally, then send a structured request to your server with these coordinates.
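For instance, a minimal recording sketch, assuming jQuery is loaded (the /api/mouse-paths endpoint is made up):

// Record mouse positions with jQuery and post them to the server in one batch.
const positions = [];

$(document).on('mousemove', (e) => {
  positions.push({ t: Date.now(), x: e.pageX, y: e.pageY });
});

// Send the batch when the user leaves the page.
window.addEventListener('beforeunload', () => {
  navigator.sendBeacon('/api/mouse-paths', JSON.stringify(positions));
});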
Use a browser automation framework like Selenium to "play" the stored coordinates. You can follow the same path as your user, in development, in order to see what they saw. For example:
void mouseMove(WebElement toElement, long xOffset, long yOffset)
This moves (from the current location) to new coordinates. There's more info here.
Take screenshots of the page with Selenium WebDriver. More info here.
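Putting playback and screenshots together, here's a sketch using Selenium 4's JavaScript bindings (the URL and the coordinate list are placeholders, and I'm assuming the npm selenium-webdriver package):

// Replay recorded coordinates and take a screenshot after each move.
const { Builder } = require('selenium-webdriver');
const { Origin } = require('selenium-webdriver/lib/input');
const fs = require('fs');

async function replay(coordinates) {
  const driver = await new Builder().forBrowser('chrome').build();
  try {
    await driver.get('https://example.com'); // placeholder: the recorded page
    for (const [i, { x, y }] of coordinates.entries()) {
      // Move the pointer to absolute viewport coordinates.
      await driver.actions().move({ x, y, origin: Origin.VIEWPORT }).perform();
      const png = await driver.takeScreenshot(); // base64-encoded PNG
      fs.writeFileSync(`frame-${i}.png`, png, 'base64');
    }
  } finally {
    await driver.quit();
  }
}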
We are creating an app that lets a user capture a number of images, from which it tries to create a 3D model of the target object. To help users capture useful images, we give them some guidance while they move their phone from one capture to the next.
We have a nice prototype working by means of navigator.mediaDevices.getUserMedia() that captures video, displays it in a <video> element, and has an overlay that shows how to move the phone. When they are ready they press a button and we grab the current frame of the streamed video.
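For reference, our frame grab is essentially the canvas drawImage trick; a sketch (the element id is made up):

// Draw the current video frame onto a canvas and export it as an image blob.
const video = document.getElementById('preview'); // our <video> element
function grabFrame() {
  const canvas = document.createElement('canvas');
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  canvas.getContext('2d').drawImage(video, 0, 0);
  return new Promise((resolve) => canvas.toBlob(resolve, 'image/jpeg', 0.95));
}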
We were quite happy with this until we realized that the captured images often don't have enough quality; mainly, they tend to be a bit blurred because the user may not hold the device totally still. This causes the math behind creating the 3D model to fail.
I am now tasked with attempting to improve this, but I don't think I have many options. Here is what I have been investigating, along with the drawbacks:
JavaScript's ImageCapture API. This seems to be exactly what we need: a way to actually take a picture instead of grabbing a frame from a video. While the API still has experimental status, it seems pretty stable, and Chrome has implemented it since version 59. The problem is that Safari (our main target) has not implemented it, and it seems they never will. I can't really find information on what their plan is, but as of today this is not an option.
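For comparison, this is roughly what the ImageCapture path would look like where it is supported (a sketch; in practice Chrome-only today):

// Take an actual photo from the camera track instead of grabbing a video frame.
async function takePhoto() {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  const [track] = stream.getVideoTracks();
  const imageCapture = new ImageCapture(track);
  return imageCapture.takePhoto(); // resolves to a full-resolution Blob
}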
Use the input element of type file with the capture attribute. While this lets me capture images with the native camera, as far as I know I cannot give the user any guidance.
Create a whole mobile app. This requires another year of work and asking our existing users to install an app, which may not be possible. It also leaves Android devices out, which we'd prefer to avoid.
While typing this I thought of perhaps using the video itself instead of capturing still images, but I'm not sure that would help in any way.
Instead of a different approach to capturing the image, I could try to grab the frame only once I can confirm that the device is as close to still as possible (using a threshold value). Perhaps I could use the gyroscope for this (we already use it to check that the user has moved the device to a place and angle we consider useful for the process). The drawback is that I am not sure it would really mitigate our problem... how still is still enough? Is it possible for a person to be that still for a second?
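A sketch of that stillness check using the devicemotion event (the threshold and time window are guesses that would need tuning):

// Resolve once the device's rotation rate stays under a threshold for a while.
// Note: iOS 13+ requires calling DeviceMotionEvent.requestPermission() first.
function waitUntilStill(threshold = 2 /* deg/s, a guess */, windowMs = 300) {
  return new Promise((resolve) => {
    let stillSince = null;
    function onMotion(e) {
      const r = e.rotationRate || {};
      const magnitude = Math.hypot(r.alpha || 0, r.beta || 0, r.gamma || 0);
      if (magnitude < threshold) {
        if (stillSince === null) stillSince = performance.now();
        if (performance.now() - stillSince >= windowMs) {
          window.removeEventListener('devicemotion', onMotion);
          resolve();
        }
      } else {
        stillSince = null; // moved too much, restart the window
      }
    }
    window.addEventListener('devicemotion', onMotion);
  });
}

// Usage: await waitUntilStill(); then grab the frame.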
So my question here is: can anyone think of an alternative to those I described? Or perhaps an improvement to one of the enumerated ones?
BTW, does anyone know what Apple's plans are for the ImageCapture API?
Background
I'm writing a browser extension which paints over the map of komoot.com/plan.
Currently I do this by placing a canvas on top of the existing canvas.
This works well, but it is static and does not yet react when the user moves the map around, zooms into it, or when the website focuses on a particular location on the map.
Question
How do I best tie into this event loop of map updates?
Approaches considered
I could mimic/reimplement how komoot processes user input, but this sounds fragile, unreliable, and messy. I would do this by adding listeners for mouse button events, cursor movement, etc.
The page's URL contains the lat and long coordinates together with the zoom level, e.g., https://www.komoot.com/plan/#49.9535480,5.3956956,11.345z. It changes after the map has changed. I assume there's a way to be notified of changes in the URL; if so, I could then dynamically update my canvas (see the sketch after this list).
This would still require some level of imitation of the page's internals. However, considerably less so than option 1.
Doing so, I could only update my canvas after the animation has finished. Not a deal breaker, but ideally I'd want to update it frame by frame, together with the map itself, for a more pleasing user experience.
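A rough sketch of option 2, with the hash format guessed from the example URL above (redrawOverlay stands in for my own drawing code):

// Watch the URL fragment for "#lat,lng,zoomz" changes and redraw the overlay.
function parsePlanHash(hash) {
  const m = hash.match(/^#(-?[\d.]+),(-?[\d.]+),([\d.]+)z$/);
  return m ? { lat: +m[1], lng: +m[2], zoom: +m[3] } : null;
}

let lastHash = location.hash;
function checkHash() {
  if (location.hash === lastHash) return;
  lastHash = location.hash;
  const view = parsePlanHash(lastHash);
  if (view) redrawOverlay(view); // my extension's own drawing code
}

window.addEventListener('hashchange', checkHash);
setInterval(checkHash, 250); // fallback in case the app rewrites the URL silently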
Additional Details

* Komoot seems to be using mapbox-gl
* It's a Manifest V2 content-script extension
* This is my first browser extension ever
* I'm writing this in Scala.js using this excellent template
* Don't let this keep you from posting JavaScript solutions or pointing me to JavaScript documentation!
You don't seem to have mentioned the obvious way: use the move event:
map.on('move', (e) => {
  // read the new center on each map update with map.getCenter()
  const center = map.getCenter();
});
But it's not really clear exactly what you're trying to do, so it's hard to advise more specifically.
I have a web application used for virtual house tours. Currently I am using VRView for these tours and it has worked pretty well; however, I've run into an issue with the gyroscope that I need fixed as soon as possible.
VRView automatically rotates the camera based on a user's device orientation. As users turn their phone, the virtual house tour turns with it, so they are able to "look around" the house. This is great for most use cases; however, lower-end devices have issues processing this sort of change. I need a way for users to disable the automatic rotation and simply swipe on their phones to look around.
I've tried the Permissions API, attempting to revoke access to the gyroscope, but due to browser compatibility with that API it doesn't work. I also can't find any documentation on this in the VRView library. Any help is much appreciated.
tldr;
You're right, this doesn't seem to be available via their API. It looks like you may have to fork the library and make some adjustments. If you want to go down this path, I'd suggest forking the repo, seeing if you can successfully disable the motion emitter, and then seeing if you can use the webvr-polyfill to initiate drag controls. It may also be possible to disable the gyro-based rotation via webvr-polyfill directly.
More in-depth explanation:
The motion information is published to the VR View iframes (which, I believe, then feed it to the webvr-polyfill controls) in two locations:
https://github.com/googlearchive/vrview/blob/bd12336b97eccd4adc9e877971c1a7da56df7d69/scripts/js/device-motion-sender.js#L35
https://github.com/googlearchive/vrview/blob/bd12336b97eccd4adc9e877971c1a7da56df7d69/src/api/iframe-message-sender.js#L45
When a browser's UA (user agent) flag indicates it can't use gyro controls, you would need to include a flag that disables this functionality (or disables the listener in the iframe).
Normally, to enable drag rotation, I think you would then need to write a listener for the start and end of drag events that would translate those events into camera rotation. (Something similar to what this person is suggesting: https://github.com/googlearchive/vrview/issues/131#issuecomment-289522607)
However, it appears that the controls are imported via webvr-polyfill. The 'window.WebVRConfig' object comes from webvr-polyfill, if I'm following this correctly.
See here: https://github.com/googlearchive/vrview/blob/bd12336b97eccd4adc9e877971c1a7da56df7d69/src/embed/main.js#L77
The code above suggests that VR View adjusts the WebVRConfig when it detects a certain flag (in this case the 'YAW_ONLY' attribute). I believe you would have to do something similar.
https://github.com/immersive-web/webvr-polyfill
See here for the YAW_ONLY attribute: https://github.com/immersive-web/webvr-polyfill/blob/e2958490653bfcda4df83881d554bcdb641cf45b/src/webvr-polyfill.js#L68
See here for an example adjusting controls in webvr-polyfill:
https://github.com/immersive-web/webvr-polyfill#using
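As a rough sketch of that idea: webvr-polyfill reads the global WebVRConfig at load time, so a fork could set flags there before the polyfill initializes (YAW_ONLY is the real flag linked above; DISABLE_GYRO_ROTATION is a made-up name for the kind of flag you'd introduce):

// Must run before webvr-polyfill loads, since the polyfill reads the
// global config object at initialization time.
window.WebVRConfig = window.WebVRConfig || {};

// Real flag (see the YAW_ONLY link above): restricts rotation to yaw.
window.WebVRConfig.YAW_ONLY = true;

// Hypothetical flag a fork could introduce to skip gyro input entirely
// and fall back to touch/drag controls:
window.WebVRConfig.DISABLE_GYRO_ROTATION = true;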
I'm currently trying to track a person's drawing movements and save them to a database.
On my webpage there is a canvas, which allows the user to draw using the mouse. What I would like is to be able to save the movements that the user makes while drawing, so that I am able to re-trace every movement made while drawing.
My own thought for a solution is that whenever the user presses the mouse button within the canvas, the coordinates are saved until the user releases the button. Another solution is to save an image of the canvas every 3-4 clicks in the canvas, so you are able to roughly see the drawing process.
Does anyone have a better solution, or some tips on how to best achieve such a feature?
UPDATE:
So I may not have been specific enough in my description. Basically, I want to record the drawing process of a user by saving the coordinates, so that I am able to retrieve them from the database and play back the user's movements while drawing.
The coordinates will be saved to the database when the user presses a save-button, so I need to store all the coordinates until the button is pressed.
I would like help on both the client- and server-side. The server-side is written in Java. I am currently using JavaScript on the client-side and MySQL as my database.
Sending the coordinates to a server while the person is drawing may cause a lot of traffic and may slow down your server's responses to other users.
Instead of capturing and sending every movement:

* buffer the movements from mouse-down until mouse-up, and then send that "draw instance" to the server;
* even better, collect these draw instances in an array and submit them to the server only when the user has finished drawing (see the sketch below).
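A client-side sketch of that buffering idea (the canvas id, save-button id, endpoint, and payload shape are all made up):

// Buffer strokes locally; one stroke = the points between mouse-down and mouse-up.
const canvas = document.getElementById('sketch');
const strokes = [];
let current = null;

canvas.addEventListener('mousedown', (e) => {
  current = [{ x: e.offsetX, y: e.offsetY, t: Date.now() }];
});
canvas.addEventListener('mousemove', (e) => {
  if (current) current.push({ x: e.offsetX, y: e.offsetY, t: Date.now() });
});
canvas.addEventListener('mouseup', () => {
  if (current) strokes.push(current);
  current = null;
});

// Submit everything in one request when the user hits the save button.
document.getElementById('save').addEventListener('click', () => {
  fetch('/api/drawings', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ strokes })
  });
});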
Making use of a WebSocket instance per user may be advantageous, as this will be a lot faster than HTTP requests (AJAX); at the same time, if you have multiple people drawing simultaneously, the WebSocket can push other users' data to each connected user's live drawing.
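And a minimal sketch of the WebSocket variant (the URL and message shape are invented; drawStroke stands in for your own canvas code):

// Send each finished stroke over a socket and draw strokes pushed by others.
const socket = new WebSocket('wss://example.com/draw');

function sendStroke(stroke) {
  socket.send(JSON.stringify({ type: 'stroke', points: stroke }));
}

socket.addEventListener('message', (event) => {
  const msg = JSON.parse(event.data);
  if (msg.type === 'stroke') drawStroke(msg.points);
});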
I'm not sure exactly what you need to accomplish here, but the above may help. If you can elaborate on what you want to accomplish, a more detailed answer can be drafted with specific instructions.
I recently spent some time playing a game called Draw Something (Android, iOS). I like the way one player can draw to the screen and the drawing will be re-created for the second player. I want to use something similar to this on my website, but I'm not sure how.
The project I'm working on will use a One-To-Many connection, rather than Draw Something's One-To-One connection. Essentially a user will make a drawing and it should be recreated for anyone who views it.
Is it possible to do this on the web using some combination of HTML5, JS, and Python?
Easily done with ontouchstart, ontouchmove and ontouchend. Example: http://ontouchstart.github.com/
Just track the coordinates of the touch events (or mouse events, using onmousedown, onmousemove and onmouseup) and send them to the server. The server then sends the data to the other clients, which draw everything based on the coordinates from the events.
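A sketch of the recording side with touch events (single-touch only; the canvas id is made up and the server round trip is left out):

// Record single-touch paths as arrays of timestamped points.
const canvas = document.getElementById('board');
const paths = [];
let path = null;

function toPoint(touch) {
  const rect = canvas.getBoundingClientRect();
  return { x: touch.clientX - rect.left, y: touch.clientY - rect.top, t: Date.now() };
}

canvas.ontouchstart = (e) => { path = [toPoint(e.touches[0])]; };
canvas.ontouchmove = (e) => {
  e.preventDefault(); // keep the page from scrolling while drawing
  if (path) path.push(toPoint(e.touches[0]));
};
canvas.ontouchend = () => {
  if (path) paths.push(path); // `paths` is what you'd send to the server
  path = null;
};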
Here is a library using Raphaeljs: http://ianli.com/sketchpad/. It stores the drawing in a JSON format that you could use to redraw it wherever you need. I'm not sure how well suited it is to what you want to do, but it might work.