I've been testing mobile support for Three.js and have discovered a quirk when it comes to the setSize function and multiple views. Here's the scenario.
While loading the webgl_multiple_views example on my Nexus 7 (Android 4.3) in Chrome for Android (29.0.1547.59), the entire rendered window is misaligned, as can be seen in this screenshot.
At first I suspected a setViewport-related issue, but after further inspection I determined that the setSize function in WebGLRenderer.js was attempting to correct the WebGL canvas size by multiplying by the devicePixelRatio, like so:
_canvas.width = width * this.devicePixelRatio;
_canvas.height = height * this.devicePixelRatio;
This seems to me a perfectly reasonable approach, but the problem is that on some Android devices the system apparently applies this scaling already, so the scene ends up skewed.
I've found that by forcing a pixel ratio of 1 I can correct the issue, but I anticipate it will break many correctly behaving mobile devices. Here's the "fix", if you will:
renderer = new THREE.WebGLRenderer( { antialias: true, alpha: false, devicePixelRatio: 1 } );
My question is: has anyone else encountered this? Does anyone have a more consistent way to scale a canvas so it respects the device pixel ratio but doesn't break some Android devices?
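For reference, the generally recommended pattern is to scale the drawing buffer by the pixel ratio while pinning the element's CSS size to the logical dimensions, so the scaling is never applied twice. A minimal sketch of that calculation (function and field names are illustrative, not part of Three.js):

```javascript
// Hypothetical helper: compute backing-store and CSS sizes for a canvas
// so that it respects devicePixelRatio without double-scaling.
function computeCanvasSize(cssWidth, cssHeight, devicePixelRatio) {
  var dpr = devicePixelRatio || 1;
  return {
    // physical pixels for the drawing buffer (canvas.width / canvas.height)
    bufferWidth: Math.round(cssWidth * dpr),
    bufferHeight: Math.round(cssHeight * dpr),
    // logical pixels for the element's CSS size (canvas.style.*)
    styleWidth: cssWidth + 'px',
    styleHeight: cssHeight + 'px'
  };
}
```

If both the buffer size and the CSS size are set from the same logical dimensions, a device that pre-scales one of them will at least do so consistently.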
Thank you kindly for any advice or assistance.
Try this function. I have tested it extensively and it reports the correct device width, height and a computedPixelRatio for every device I have tried...
function getDeviceDimensions() // get the width and height of the viewing device based on its OS
{
    var screenWidth = window.screen.width;   // get the width
    var screenHeight = window.screen.height; // get the height
    // a more reliable measure of device pixel ratio, due to Android Firefox bugs
    var computedPixelRatio = window.screen.availWidth / document.documentElement.clientWidth;
    // select whichever value is larger between the two pixel-ratio methods
    computedPixelRatio = Math.max(window.devicePixelRatio, computedPixelRatio);
    if (navigator.userAgent.indexOf('Android') != -1) // if the client is on an Android system
    {
        screenWidth = screenWidth / computedPixelRatio;   // divide the reported width by the pixel ratio
        screenHeight = screenHeight / computedPixelRatio; // divide the reported height by the pixel ratio
    }
    return { width: screenWidth, height: screenHeight, pixelRatio: computedPixelRatio };
}
Related
I'm developing an HTML5 canvas browser game with CreateJS (Adobe Animate), and after the last update of the Samsung Internet browser, when the user clicks somewhere it fires the click event for a totally different button. It looks like the CreateJS internal method _getObjectsUnderPoint() returns the wrong click target. Even when I click on empty space on the stage, where there are no buttons, a click handler fires. This strange behavior is not present in Google Chrome or Opera for mobile, only in Samsung Internet.
In rare cases, even though the click target is correct, the handler is not called. More specifically, the problem seems to be in the CreateJS functions _testMask() or _testHit(), which use the canvas context and matrix transformations.
Does anyone have an idea what should be fixed in the CreateJS library for the Samsung Internet browser?
I solved the problem like this:
Check if the user agent contains SamsungBrowser:
var userAgent = navigator.userAgent;
userAgent = userAgent.toLowerCase();
var isSamsungBrowser = userAgent.indexOf("samsungbrowser") != -1;
In CreateJS, in _getObjectsUnderPoint(), I added this near the end of the for loop:
if (isSamsungBrowser) {
    var bounds = child.getBounds();
    if (bounds) {
        var p = child.globalToLocal(x, y);
        var hit = p.x >= bounds.x &&
                  p.x <= bounds.x + bounds.width &&
                  p.y >= bounds.y &&
                  p.y <= bounds.y + bounds.height;
        if (!hit) {
            continue;
        }
    } else {
        console.error('NO BOUNDS FOR', child);
        continue;
    }
}
The only drawback is that every Shape must have bounds, so you can convert the shapes to bitmaps in Adobe Animate or use setBounds().
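The core of the workaround is an axis-aligned bounds test on the point returned by globalToLocal. Isolated as a plain function (illustrative, not part of CreateJS), it looks like this:

```javascript
// Hypothetical standalone version of the bounds check used in the patch:
// returns true if a local-space point falls inside a bounds rectangle.
function hitsBounds(p, bounds) {
  return p.x >= bounds.x &&
         p.x <= bounds.x + bounds.width &&
         p.y >= bounds.y &&
         p.y <= bounds.y + bounds.height;
}
```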
I am starting to work with Spark AR Studio, and I am looking for a way to get the screen size in pixels so I can compare it against the coordinates obtained from gesture.location on tap.
TouchGestures.onTap().subscribe((gesture) => {
// ! The location is always specified in the screen coordinates
Diagnostics.log(`Screen touch in pixel = { x:${gesture.location.x}, y: ${gesture.location.y} }`);
// ????
});
The gesture.location is in pixels (screen coordinates), and I would like to compare it with the screen size to determine which side of the screen was touched.
Maybe using the Camera.focalPlane could be a good idea...
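Once a usable screen width is available, the side-of-screen check itself is simple. A minimal sketch in plain JavaScript (name and units are illustrative; tapX and screenWidth are assumed to be in the same pixel units):

```javascript
// Hypothetical helper: classify a tap by which half of the screen it hit.
function sideOfScreen(tapX, screenWidth) {
  return tapX < screenWidth / 2 ? 'left' : 'right';
}
```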
Update
I tried two new things to get the screen size:
const CameraInfo = require('CameraInfo');
Diagnostics.log(CameraInfo.previewSize.height.pinLastValue());
const focalPlane = Scene.root.find('Camera').focalPlane;
Diagnostics.log(focalPlane.height.pinLastValue());
But both return 0
This answer might be a bit late, but it might be a nice addition for people looking for a solution where the values can easily be used in a script. I came across this code (not mine; I forgot to save the link):
var screen_height = 0;
Scene.root.find('screenCanvas').bounds.height.monitor({fireOnInitialValue: true}).subscribe(function (height) {
screen_height = height.newValue;
});
var screen_width = 0;
Scene.root.find('screenCanvas').bounds.width.monitor({fireOnInitialValue: true}).subscribe(function (width) {
screen_width = width.newValue;
});
This worked well for me since I couldn't figure out how to use Diagnostics.log with the data instead of Diagnostics.watch.
Finally,
Using the Device Info in the Patch Editor and passing these to the script works!
First, add a variable "to script" in the editor:
Then, create that in patch editor:
And you can grab that with this script:
const Patches = require('Patches');
const screenSize = Patches.getPoint2DValue('screenSize');
My mistake was to use Diagnostics.log() to check whether my variable worked.
Instead, use Diagnostics.watch():
Diagnostics.watch('screenSize.x', screenSize.x);
Diagnostics.watch('screenSize.y', screenSize.y);
Screen size is available via the Device Info patch output, after dragging it to patch editor from the Scene section.
Now in the open beta (as of this post) you can drag Device from the scene sidebar into the patch editor to get a patch that outputs the screen size, screen scale, and safe area insets, as well as the self Object.
The Device patch
The device size can be used in scripts via CameraInfo.previewSize.width and CameraInfo.previewSize.height respectively. For instance, if you wanted to get 2D points representing the min/max points on the screen, this would do the trick:
const CameraInfo = require('CameraInfo')
const Reactive = require('Reactive')
const min = Reactive.point2d(
Reactive.val(0),
Reactive.val(0)
)
const max = Reactive.point2d(
CameraInfo.previewSize.width,
CameraInfo.previewSize.height
)
(The point I want to emphasize is that CameraInfo.previewSize.width and CameraInfo.previewSize.height are ScalarSignals, not number literals.)
Edit: Here's a link to the documentation: https://sparkar.facebook.com/ar-studio/learn/documentation/reference/classes/camerainfomodule
I would like to integrate screen-notch support into my Cordova application.
However, a couple of months ago the iPhone X was the only smartphone with a screen notch, so detecting and handling it was pretty easy:
(function(){
// Really basic check for the ios platform
// https://stackoverflow.com/questions/9038625/detect-if-device-is-ios
var iOS = /iPad|iPhone|iPod/.test(navigator.userAgent) && !window.MSStream;
// Get the device pixel ratio
var ratio = window.devicePixelRatio || 1;
// Define the users device screen dimensions
var screen = {
width : window.screen.width * ratio,
height : window.screen.height * ratio
};
// iPhone X Detection
if (iOS && screen.width === 1125 && screen.height === 2436) {
alert('iPhoneX Detected!');
}
})();
I could then, with JavaScript, apply a top offset of 20px, and the screen-notch support was complete.
However, as more and more phones adopt the screen notch, detection gets a lot more complicated, and I don't know where to start. Does anyone have a good idea on how to solve this problem?
As you can see below, a lot of smartphone companies are starting to use the screen notch, and a good app should support all devices, right?
Phones with screen notch:
Huawei P20 series
Asus ZenFone 5 and 5Z
Huawei Honor 10
Oppo R15 and R15 Pro
Oppo F7
Vivo V9
Vivo X21 and X21 UD
OnePlus 6
LG G7 ThinQ
Leagoo S9
Oukitel U18
Sharp Aquos S3
...
The CSS safe area works fine on iOS, but it doesn't on Android. Since I had to detect notches on Android, I wrote a small Cordova plugin which allows you to fetch the window insets:
https://www.npmjs.com/package/cordova-plugin-android-notch
window.AndroidNotch.hasCutout(function (cutout) {
    alert("Cutout: " + cutout);
});
window.AndroidNotch.getInsetTop(function (insetSize) {
    alert("Top Inset: " + insetSize);
});
// There is also getInsetRight, getInsetBottom, getInsetLeft
Your best bet for supporting all notch devices is to use the CSS "safe area", instead of trying to keep a catalogue of all devices with notches and applying your own logic.
Tutorial:
https://blog.phonegap.com/displaying-a-phonegap-app-correctly-on-the-iphone-x-c4a85664c493
[UPDATE]: This does not work on Android devices, despite being supported according to MDN: https://developer.mozilla.org/en-US/docs/Web/CSS/env
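For reference, the safe-area technique from the tutorial above boils down to opting the page into the full viewport and then padding by the inset environment variables. A minimal sketch (selectors illustrative):

```css
/* Assumes the viewport meta tag opts into the full screen:
   <meta name="viewport" content="width=device-width, viewport-fit=cover"> */
body {
  /* constant() is the older iOS 11.0 spelling; env() is the standard one */
  padding-top: constant(safe-area-inset-top);
  padding-top: env(safe-area-inset-top);
  padding-bottom: constant(safe-area-inset-bottom);
  padding-bottom: env(safe-area-inset-bottom);
}
```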
This may help too. Since I don't add padding to my body, I know that if padding was added, it was because of the camera notch. So I use this code in Angular to get the top and bottom padding (or zero).
ngAfterViewInit() {
const topPadding = document.body.style.paddingTop;
const botPadding = document.body.style.paddingBottom;
const regex = /\d+/;
const tp = regex.exec(topPadding);
const bt = regex.exec(botPadding);
const toppad = (tp) ? parseInt(tp[0].valueOf(), 10) : 0;
const botpad = (bt) ? parseInt(bt[0].valueOf(), 10) : 0;
// use toppad and botpad however you like.
...etc...
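The regex extraction above can be isolated into a small reusable helper (plain JavaScript; the name is illustrative):

```javascript
// Hypothetical helper: extract the numeric pixel value from a CSS length
// string such as "44px", falling back to 0 when no digits are present.
function parsePaddingPx(padding) {
  var match = /\d+/.exec(padding || '');
  return match ? parseInt(match[0], 10) : 0;
}
```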
I've implemented a simple example based on this.
My example uses Chipmunk along with Cocos2d-JS.
The problem is that the physics only works in web builds. In the other, native builds (iOS, Mac, Win32), all objects are shown, but they just hang with no animation.
My update method is called at the specified interval, where I execute the "step" method on the space object.
All my sprites are loaded using PhysicSprite class.
PS: I'm using cocos2d-js v3.0alpha
Use this tutorial:
http://www.cocos2d-x.org/docs/tutorial/framework/html5/parkour-game-with-javascript-v3.0/chapter6/en
I tried it both in the browser and in the iphone simulator and it worked just fine.
You should apply an impulse to your physics body; then it will move reliably. But if you try to move the body from a scheduler by changing its coordinates on every call, it will work on the web but not on native platforms like iOS or Mac.
For example:
var mass = 1;
var width = 1, height = 1;
playerBody = new cp.Body(mass , cp.momentForBox(mass, width, height));
playerBody.applyImpulse(cp.v(200, 300), cp.v(0, 0));// now you can move your playerBody
This will work well on all platforms. But if you try my alternative approach, i.e.:
init: function () {
    var mass = 1;
    var width = 1, height = 1;
    this.playerBody = new cp.Body(mass, cp.momentForBox(mass, width, height));
    this.schedule(this.move);
},
move: function (dt) {
    this.playerBody.getPos().x += 2 * dt;
    this.playerBody.getPos().y += 2 * dt;
}
this will work on the web, but on native platforms like iOS or Mac it will not move the playerBody at all. I don't know the reason yet; if I find out, I will let you know.
I am working on HTML5 games, and everything was working well in Chrome, Firefox, Safari, iPad iOS 6, and Android, but it does not work well on iOS 7 and the iPhone 4. The touch events do not work well; even a simple e.preventDefault() can't handle the double-touch issue, and the page keeps zooming in and out, unlike on other devices.
Has anyone had the same issue?
Here's part of my code; sorry, I can't share too much because of an NDA.
var ua = navigator.userAgent.toLowerCase();
var checks = Boolean(ua.match(/android/)) ||
             Boolean(ua.match(/ipod/)) ||
             Boolean(ua.match(/ipad/)) ||
             Boolean(ua.match(/tablet/)) ||
             Boolean(ua.match(/tablet pc/));
var touchable = checks && (typeof (document.ontouchstart) != 'undefined');
if (touchable) {
    canvas.addEventListener('touchstart', mouseDown, false);
    canvas.addEventListener('touchmove', mouseMove, false);
    document.addEventListener('touchend', mouseUp, false);
} else {
    canvas.addEventListener('mousedown', mouseDown, false);
    canvas.addEventListener('mousemove', mouseMove, false);
    document.addEventListener('mouseup', mouseUp, false);
}
function getMousePos(evt)
{
if(touchable && evt.touches.length>1)return;
if(touchable)evt = evt.changedTouches[0];
var rect = canvas.getBoundingClientRect();
return {
x: (evt.clientX - rect.left) / game.scale.x,
y: (evt.clientY - rect.top) / game.scale.y
};
}
function mouseDown(e)
{
var mousePos = getMousePos(e);
alert(mousePos.x + "," + mousePos.y);
e.preventDefault();
}
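One common way to tame the double-tap zoom is to call e.preventDefault() on touchend whenever two taps land within the classic 300ms tap window. The timing check at the heart of that workaround can be sketched as a pure function (name and default are illustrative):

```javascript
// Hypothetical helper: decide whether a touch should be treated as the
// second tap of a double-tap (and therefore suppressed to avoid zoom).
// Timestamps are in milliseconds; 300ms matches the classic tap delay.
function isDoubleTap(lastTapTime, now, threshold) {
  threshold = threshold || 300;
  return (now - lastTapTime) < threshold;
}
```

In a touchend handler you would keep the previous tap's timestamp and call e.preventDefault() when isDoubleTap(lastTap, Date.now()) returns true.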
I definitely faced this issue while testing PhoneGap apps on an iPhone 4 running iOS 7. I removed the 300ms delay with fastclick.js, set all transitions to 0, removed the hover delay, and there was still significant lag when scrolling. Unfortunately, the issue seems to be directly related to the iPhone 4 hardware attempting to run iOS 7. While iOS 7 is technically available on the iPhone 4, it was not designed with that hardware in mind.