I'm looking for the best approach to achieve the following with today's web technologies:
A 3D model is rendered on screen and stays at a fixed position. The position does not change and the model does not interact with the environment, so I'd call it "AR-light". If the user rotates their device, they lose sight of the object, because it always stays at that position.
Later on, this position will be determined by a GPS signal, but that's not my focus right now.
Is there a good framework for this? AR.js seems to be too much for what I'm actually trying to achieve.
Assuming you are using WebGL to display your model, I believe three.js gives you some routines that can vary the camera angle and position based on device movement (see the sketch below).
Threejs Rotating object with device orientation control
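For the "AR-light" case (model fixed in the world, camera following the device), a minimal sketch might look like the following, assuming three.js with its example DeviceOrientationControls module (the module has moved around and was deprecated in newer three.js releases, so check your version):

import * as THREE from 'three';
import { DeviceOrientationControls } from 'three/examples/jsm/controls/DeviceOrientationControls.js';

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(60, window.innerWidth / window.innerHeight, 0.1, 100);
const renderer = new THREE.WebGLRenderer({ alpha: true }); // transparent canvas for an overlay feel
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

// The model keeps a fixed world position; only the camera orientation follows the device.
const box = new THREE.Mesh(new THREE.BoxGeometry(1, 1, 1), new THREE.MeshNormalMaterial());
box.position.set(0, 0, -5);
scene.add(box);

const controls = new DeviceOrientationControls(camera);

function animate() {
    requestAnimationFrame(animate);
    controls.update(); // maps the deviceorientation alpha/beta/gamma angles onto the camera quaternion
    renderer.render(scene, camera);
}
animate();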
Alternatively, if you are learning something new, it's usually best to access the device's accelerometer directly and convert movements into changes of camera position/orientation yourself. See:
How to detect movement of an android device?
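If you prefer the raw sensor route, a bare-bones devicemotion listener looks like this; note that integrating acceleration into position drifts quickly, so for a "look around" effect the deviceorientation angles are usually the more practical input:

window.addEventListener('devicemotion', (event) => {
    const accel = event.accelerationIncludingGravity; // m/s^2 in device coordinates
    if (accel) {
        console.log(accel.x, accel.y, accel.z);
        // Feed these into your own filtering/integration step to derive camera changes.
    }
});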
Related
I have a web application used for virtual house tours. Currently I am using VRView for these tours and it has worked pretty well; however, I've run into an issue with the gyroscope that I need fixed as soon as possible.
VRView will automatically rotate the camera based on a user's device orientation. As a user turns their phone, the virtual house tour will also turn, so the user is able to "look around" the house. This is great for most use cases; however, lower-end devices have issues processing this sort of change. I need a way for users to disable the automatic rotation and simply swipe on their phones to look around.
I’ve tried the permissions api and trying to revoke access to gyroscope, but due to browser compatibility with that api, it doesn’t work. I also can’t find any documentation on this in the VRView library. Any help is much appreciated.
tldr;
You're right, this doesn't seem to be available via their API. It looks like you may have to fork the library and make some adjustments. If you want to go down this path, I'd suggest forking the repo, seeing if you can successfully disable the motion emitter, and then seeing if you can use the webvr-polyfill to initiate drag controls. It may also be possible to just disable the gyro-based rotation via webvr-polyfill directly.
More in-depth explanation:
The motion information is published to the VR View iframes (which I believe then feed it to the webvr-polyfill controls) from two locations:
https://github.com/googlearchive/vrview/blob/bd12336b97eccd4adc9e877971c1a7da56df7d69/scripts/js/device-motion-sender.js#L35
https://github.com/googlearchive/vrview/blob/bd12336b97eccd4adc9e877971c1a7da56df7d69/src/api/iframe-message-sender.js#L45
When a browser's UA (user agent) string indicates it can't handle gyro controls, you would need to set a flag that disables this functionality (or disables the listener in the iframe).
Normally, to enable drag rotation, I think you would then need to write a listener for the start and end of drag events that would translate those events into camera rotation. (Something similar to what this person is suggesting: https://github.com/googlearchive/vrview/issues/131#issuecomment-289522607)
However, it appears that the controls are imported via webvr-polyfill. The 'window.WebVRConfig' object comes from webvr-polyfill, if I'm following this correctly.
See here: https://github.com/googlearchive/vrview/blob/bd12336b97eccd4adc9e877971c1a7da56df7d69/src/embed/main.js#L77
In the code above, it looks like VR View adjusts the WebVRConfig when it detects a certain flag (in this case setting the 'YAW_ONLY' attribute). I believe you would have to do something similar (a rough sketch follows after the links below).
https://github.com/immersive-web/webvr-polyfill
See here for the YAW_ONLY attribute: https://github.com/immersive-web/webvr-polyfill/blob/e2958490653bfcda4df83881d554bcdb641cf45b/src/webvr-polyfill.js#L68
See here for an example adjusting controls in webvr-polyfill:
https://github.com/immersive-web/webvr-polyfill#using
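As a rough illustration of that idea (not tested against VR View; YAW_ONLY is the flag VR View already sets, TOUCH_PANNER_DISABLED is a documented webvr-polyfill option, and shouldDisableGyro is a hypothetical check you would supply, e.g. a UA sniff or a query-string flag), a fork might set the config before the polyfill initializes:

// Hypothetical patch in the forked VR View embed code, before webvr-polyfill loads.
window.WebVRConfig = window.WebVRConfig || {};
if (shouldDisableGyro) {
    window.WebVRConfig.YAW_ONLY = true;               // restrict rotation to yaw, as VR View already does
    window.WebVRConfig.TOUCH_PANNER_DISABLED = false; // keep touch/drag panning available
}
// Whether this alone stops gyro-driven rotation would need to be verified; you may still
// have to suppress the device-motion messages sent to the iframe (see the links above).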
Question:
I've been working on a first-person maze game with Threejs. I recently added DeviceOrientationControls to start moving it towards VR, but my previous system for moving the camera has become disconnected from it: the camera no longer moves with the arrow keys.
How can I move my camera using arrow keys again while the camera is updated with DeviceOrientationControls?
If possible, how can I automate forward movement relative to the camera's perspective?
Update:
Alex Fallenstedt found a perfect example of what I wanted.
However, I have some questions:
How does the script make the camera move?
How can I simplify this and/or implement this into my work?
Resources:
How to control camera both with keyboard and mouse - three.js
Detecting arrow key presses in JavaScript
How to move camera along a simple path
How to control camera movement with up,down,left,right keys on the keyboard
Comparison:
Here's how it behaved before (with working controls):
http://orenda.ga/stackoverflow/Nonvr.mp4
Here's how it behaves now (with DeviceOrientationControls):
http://orenda.ga/stackoverflow/VR.mp4
Note:
I haven't included my script since I don't think it's needed for this question. If you need it, please ask and I will add it here.
To answer your two questions:
1) How does the script make the camera move?
Let's break the script down to its fundamentals. The script begins by adding a bit of state to determine whether the user is moving forward. This state changes when the user interacts with W, A, S, D: we add event listeners that update it when a key is pressed or released. Then, on every rendered frame, we can add velocity in specific directions depending on which keys are currently pressed. This happens here. We now have a concept of velocity. You should console.log the velocity values in animate() and watch how they change as you move around.
So how does this velocity variable actually move the camera? Well, this script also uses an additional script called PointerLockControls. You can see the full script for PointerLockControls here. Notice how PointerLockControls' only argument is a camera. In order to move the camera, we need to use some nifty functions in PointerLockControls like getObject. This returns the THREE.js object that you passed into PointerLockControls (i.e., the camera).
Once we have the camera, we can move it by translating its x, y, and z values by the velocity we computed earlier in our animate function. You can read more about translating objects with these methods in the Object3D documentation. A condensed sketch of this whole pattern follows.
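Put together, the pattern described above condenses to something like this (names follow the classic three.js PointerLockControls example; the damping and acceleration constants are arbitrary, and controls, renderer, scene and camera are assumed to exist already):

const move = { forward: false, backward: false, left: false, right: false };

function setKey(code, pressed) {
    if (code === 'KeyW' || code === 'ArrowUp') move.forward = pressed;
    if (code === 'KeyS' || code === 'ArrowDown') move.backward = pressed;
    if (code === 'KeyA' || code === 'ArrowLeft') move.left = pressed;
    if (code === 'KeyD' || code === 'ArrowRight') move.right = pressed;
}
document.addEventListener('keydown', (e) => setKey(e.code, true));
document.addEventListener('keyup', (e) => setKey(e.code, false));

const velocity = new THREE.Vector3();
let prevTime = performance.now();

function animate() {
    requestAnimationFrame(animate);
    const time = performance.now();
    const delta = (time - prevTime) / 1000;

    // Damp the existing velocity, then accelerate in whichever directions are pressed.
    velocity.x -= velocity.x * 10.0 * delta;
    velocity.z -= velocity.z * 10.0 * delta;
    if (move.forward) velocity.z -= 40.0 * delta;
    if (move.backward) velocity.z += 40.0 * delta;
    if (move.left) velocity.x -= 40.0 * delta;
    if (move.right) velocity.x += 40.0 * delta;

    // controls is a PointerLockControls instance; getObject() returns the wrapped camera,
    // so these translations are applied relative to the direction the camera is facing.
    controls.getObject().translateX(velocity.x * delta);
    controls.getObject().translateZ(velocity.z * delta);

    prevTime = time;
    renderer.render(scene, camera);
}
animate();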
2) How can I simplify this and/or implement this into my work?
This is probably the easiest way to add first-person controls to a camera. Everything else in the script example I shared with you earlier is for extra checks: if the user is standing on a threeJS object, allow them to jump; or adding a concept of mass and gravity to pull the user back down after they jump. One thing you might find very useful is checking whether the pointer is inside the browser's window. Take the example, break it down to its simplest components, and build from there.
Here is a CodePen example that shows you how to move the camera forward smoothly, without PointerLockControls; start at line 53: https://codepen.io/Fallenstedt/pen/QvKBQo
I am working on a mobile web project that needs to know the compass direction the user's device is pointing. It's incredibly simple right now, but here's what I have:
var updateDirection = function (evt) {
$("#direction").val(evt.alpha);
};
window.addEventListener("deviceorientation", updateDirection);
Based on what I've researched so far, the alpha value of the event should be the compass heading. However, I've observed (and read) that implementations vary widely across OSes and browsers.
On my HTC smartphone, in Chrome or the default Android browser, I only get a reasonable reading (0 degrees = north, 90 = east, and so on) when I hold the phone perfectly vertical in a "selfie" position. Tilting the phone at any other angle throws the readings quite far off.
On my Surface Pro using Chrome, I can't get a reading greater than about 50.
On my Surface Pro using Edge, I get very reasonable readings, but only when I hold the device horizontal, as if it were lying on a table.
It seems likely that people have managed to get the compass direction in a mobile browser regardless of device. Is there a library I can use to do this, or is it necessary to code for many device-specific scenarios, like this example, which also didn't work for all the devices listed:
Device Orientation Events
Is it really necessary for you to use JavaScript to find out the orientation?
You could possibly achieve the same result with CSS media queries.
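If screen orientation (portrait vs. landscape) is really all you need, the same media query is also readable from JavaScript via matchMedia; note that this gives you nothing like a compass heading:

const portraitQuery = window.matchMedia('(orientation: portrait)');

function onOrientationChange(mq) {
    console.log(mq.matches ? 'portrait' : 'landscape');
}

onOrientationChange(portraitQuery);                      // initial state
portraitQuery.addEventListener('change', onOrientationChange);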
I'm working on a web app and I would like to get some first-hand experience of how our users actually use our software. This is my idea:
* Use JavaScript to save the HTML DOM and cursor position, possibly only the changes to the DOM to reduce the amount of data.
* Save it to the server along with the user's browser.
* Write a script that replays the DOM according to the recording, plus an image that replicates the mouse movements in the corresponding browser.
Has this ever been done before?
Would this work in most cases?
As circle73 said, you can use HTML5 to do this via canvas; however, I don't think that would track the mouse position. You could write a JavaScript function to track the mouse coordinates every x seconds; you'd just have to sync it with the screen captures so you can match the mouse movements to the captured frames (see the sketch below).
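A rough sketch of that sampling idea (the /api/mouse-track endpoint and the batch size are made up; wire it to whatever your backend expects):

let lastPosition = { x: 0, y: 0 };
const samples = [];

document.addEventListener('mousemove', (e) => {
    lastPosition = { x: e.pageX, y: e.pageY };
});

setInterval(() => {
    samples.push({ t: Date.now(), x: lastPosition.x, y: lastPosition.y });
    if (samples.length >= 50) {
        // Ship a batch to the server along with the browser's user agent string.
        navigator.sendBeacon('/api/mouse-track', JSON.stringify({
            userAgent: navigator.userAgent,
            samples: samples.splice(0, samples.length)
        }));
    }
}, 100);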
Another option would be to do this via an ActiveX control, as answered here: Take a screenshot of a webpage with JavaScript?
I would approach this with the following high-level strategy:
Use jQuery mouseover to record the user's mouse positions on the page. Store these positions (x,y coordinates) locally. Send a structured request to your server with these coordinates.
Use a browser automation framework like Selenium to "play" the stored coordinates. You can replay the same path as your user, in development, to see what they saw. For example:
void mouseMove(WebElement toElement, long xOffset, long yOffset)
This moves (from the current location) to new coordinates. There's more info here.
Take screenshots of the page with Selenium WebDriver. More info here.
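For the replay side, a sketch with the JavaScript selenium-webdriver bindings might look roughly like this (recordedSamples is the coordinate list captured from the user; exact Actions API details vary between Selenium versions, so treat this as an outline rather than a finished implementation):

const fs = require('fs');
const { Builder } = require('selenium-webdriver');

async function replay(url, recordedSamples) {
    const driver = await new Builder().forBrowser('chrome').build();
    try {
        await driver.get(url);
        for (const sample of recordedSamples) {
            // Move the virtual mouse to each recorded coordinate (relative to the viewport).
            await driver.actions().move({ x: sample.x, y: sample.y }).perform();
            // Capture a frame so the replay can be reviewed alongside the mouse path.
            const png = await driver.takeScreenshot();
            fs.writeFileSync(`frame-${sample.t}.png`, png, 'base64');
        }
    } finally {
        await driver.quit();
    }
}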
I recently spent some time playing a game called Draw Something (Android, iOS). I like the way one player can draw on the screen and the drawing is then re-created for the second player. I want to use something similar on my website, but I'm not sure how.
The project I'm working on will use a One-To-Many connection, rather than Draw Something's One-To-One connection. Essentially a user will make a drawing and it should be recreated for anyone who views it.
Is it possible to do this on the web using some combination of HTML5, JS, and Python?
Easily done with ontouchstart, ontouchmove and ontouchend. Example: http://ontouchstart.github.com/
Just track the coordinates of the touch events (or mouse events, using onmousedown, onmousemove and onmouseup) and send them to the server. The server then sends the data to the other clients, which redraw everything based on the coordinates from the events.
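A minimal sketch of the recording half, assuming a <canvas id="pad"> element and a made-up /api/strokes endpoint; the replay side just iterates the same point lists and draws the line segments:

const canvas = document.getElementById('pad');
const ctx = canvas.getContext('2d');
let currentStroke = null;

function touchPoint(e) {
    const rect = canvas.getBoundingClientRect();
    const touch = e.touches[0];
    return { x: touch.clientX - rect.left, y: touch.clientY - rect.top };
}

canvas.ontouchstart = (e) => {
    currentStroke = [touchPoint(e)];
};

canvas.ontouchmove = (e) => {
    if (!currentStroke) return;
    e.preventDefault(); // stop the page from scrolling while drawing
    const p = touchPoint(e);
    const prev = currentStroke[currentStroke.length - 1];
    ctx.beginPath();
    ctx.moveTo(prev.x, prev.y);
    ctx.lineTo(p.x, p.y);
    ctx.stroke();
    currentStroke.push(p);
};

canvas.ontouchend = () => {
    // Ship the finished stroke to the server; other clients redraw from the same points.
    fetch('/api/strokes', { method: 'POST', body: JSON.stringify(currentStroke) });
    currentStroke = null;
};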
Here is a library using Raphaeljs http://ianli.com/sketchpad/. It stores the drawing in a JSON format that you could use to redraw wherever you need it. I'm not sure how well it would be suited for what you want to do, but it might work.