Having issues with the canvas being attached to the document on iPhone. When running cordova compile and then cordova launch browser, the canvas is added and renders fine. Using Xcode and the iOS simulator, the app launches but it appears that the canvas never gets rendered. Am I doing something wrong here?
onDeviceReady: function() {
    this.receivedEvent('deviceready');

    var Container = PIXI.Container,
        autoDetectRenderer = PIXI.autoDetectRenderer,
        loader = PIXI.loader,
        resources = PIXI.loader.resources,
        Sprite = PIXI.Sprite,
        Rectangle = PIXI.Rectangle,
        TextureCache = PIXI.TextureCache,
        Graphics = PIXI.Graphics,
        Text = PIXI.Text,
        ParticleContainer = PIXI.ParticleContainer,
        stage,
        renderer;

    stage = new Container();
    renderer = new autoDetectRenderer(window.innerWidth, window.innerHeight);
    document.body.appendChild(renderer.view);
},
In that code example you never actually render the container. Check the PIXI examples, e.g. here: https://pixijs.github.io/examples/#/basics/basic.js
So you need to also call:
    renderer.render(stage);
And you will most likely need a requestAnimationFrame(functionToCall) call there to actually run the game once you have moving parts.
As in the other answer, you need to call renderer.render(stage) recursively with the native requestAnimationFrame method. Furthermore, since Cordova is just a web page running inside a Web View, this example will run in any web browser.
onDeviceReady: function() {
    this.receivedEvent('deviceready');

    var Container = PIXI.Container,
        autoDetectRenderer = PIXI.autoDetectRenderer,
        loader = PIXI.loader,
        resources = PIXI.loader.resources,
        Sprite = PIXI.Sprite,
        Rectangle = PIXI.Rectangle,
        TextureCache = PIXI.TextureCache,
        Graphics = PIXI.Graphics,
        Text = PIXI.Text,
        ParticleContainer = PIXI.ParticleContainer,
        stage,
        renderer;

    stage = new Container();
    // autoDetectRenderer is a factory function, so no "new" is needed
    renderer = autoDetectRenderer(window.innerWidth, window.innerHeight);
    document.body.appendChild(renderer.view);

    // requestAnimationFrame will make gameLoop recursive
    function gameLoop() {
        // Loop this function 60 times per second
        requestAnimationFrame(gameLoop);
        // HERE: <-- the logic of your game or animation
        renderer.render(stage);
    }
    gameLoop();
},
For additional explanation of requestAnimationFrame, read the following link: https://www.paulirish.com/2011/requestanimationframe-for-smart-animating/
I'm trying to load .obj files using three.js and WebGL. Currently I can load one object by directly changing the code, but I want users to be able to submit their own .obj files.
So far, this is the code that I have. I tried using various examples but I can't really grasp how to use them correctly. I think I might have to write a function to update, but would that require redoing everything in init? (By "redo" I pretty much mean copy/paste.) Is there an easier way of doing it?
Relevant parts of the HTML file:
<form id="upload" method="get">
.obj Upload: <input type="text" size="20" name="objfile" id="objfile">
<!-- <input type="button" id="objsubmit" value="Send"> -->
</form>
<script type="text/javascript" src="cubemain.js"></script>
cubemain.js:
var scene, camera, renderer;
var container;
var objFile;

window.onload = function() {
    init();
    render();
}

function init() {
    // upload
    objFile = document.getElementById("objfile");

    // set up scene, camera, renderer
    scene = new THREE.Scene();
    var FOV = 70;
    var ASPECT = window.innerWidth / window.innerHeight;
    var NEAR = 1;
    var FAR = 1000;
    camera = new THREE.PerspectiveCamera(FOV, ASPECT, NEAR, FAR);
    // move camera back, otherwise it is in the same position as the cube
    camera.position.z = 5;

    var light = new THREE.AmbientLight(0x101030);
    scene.add(light);

    var directionalLight = new THREE.DirectionalLight(0xffeedd);
    directionalLight.position.set(0, 0, 1);
    scene.add(directionalLight);

    // use OBJLoader
    var loader = new THREE.OBJLoader();
    loader.load(objFile.value + ".obj", function(object) {
        scene.add(object);
    });

    renderer = new THREE.WebGLRenderer();
    renderer.setSize(.70 * window.innerWidth, .75 * window.innerHeight);
    document.body.appendChild(renderer.domElement);
    renderer.setClearColor(0xCC99FF, 1);

    var controls = new THREE.OrbitControls(camera, renderer.domElement);
}

function render() {
    requestAnimationFrame(render);
    renderer.render(scene, camera);
}
It really depends on various aspects. What I did was simple.
    <!-- In the html file -->
    The object: <input type="text" id="inputFile">
    <input type="button" id="submitBtn" onclick="loadTheFile()">
Now when the button is clicked, it will trigger a function in your JavaScript file. All you have to do now is create the loadTheFile function and another function that loads the .obj file. Here is a simple snippet of it.
// The rest of your javascript file

function loadTheFile() {
    // First we need to get the value of the text. We shall store the name of
    // the file in a fileName variable
    var fileName = document.getElementById("inputFile").value;
    // Now what you should do is run another function, let us call it loadingFile,
    // which has the argument fileName, the name of the file that we have received
    loadingFile(fileName);
}

function loadingFile(fileName) {
    // Use your loader and, instead of the static path, use fileName as your new path.
    // Please note that the path that you pass into the loader is a relative path
    // to your main application. So if, for example, you store your file in the models folder
    // and the fileName variable is just a file name and not a relative directory, then
    // you should prepend the path of the folder as follows:
    var completePath = "models/" + fileName;

    // the rest of your loader
    // use OBJLoader
    var loader = new THREE.OBJLoader();
    loader.load(completePath + ".obj", function(object) {
        scene.add(object);
    });
}
Hope that helps!
To help you further, since we have declared and defined a helper function, loadingFile, you can change the loader part in your init() function as follows:
before:
    var loader = new THREE.OBJLoader();
    loader.load(objFile.value + ".obj", function(object) {
        scene.add(object);
    });
after:
    loadingFile(objFile.value);
Follow-up question: does your code run properly? As in, can you see the object? I ask because your code is much cleaner and more concise than what I usually use, so if it works I can use your code to improve the readability of mine. Thank you very much and I really hope my answer helps you! ^u^
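If you want users to pick a file from their own machine rather than type a name, one option is a file input plus an object URL. This is only a sketch under assumptions: it assumes THREE.OBJLoader is loaded, a global scene exists, and the objUpload id is made up for this example.
    <input type="file" id="objUpload" accept=".obj">

    // Sketch: load a user-selected .obj file via a temporary object URL.
    // Assumes THREE.OBJLoader is available and "scene" is in scope;
    // the "objUpload" id is hypothetical.
    document.getElementById("objUpload").addEventListener("change", function(e) {
        var file = e.target.files[0];
        if (!file) return;
        var url = URL.createObjectURL(file);
        var loader = new THREE.OBJLoader();
        loader.load(url, function(object) {
            scene.add(object);
            URL.revokeObjectURL(url); // release the temporary URL once loaded
        });
    });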
I am attempting to create a thumbnail preview from a video file (mp4, 3gp) chosen via a form input type='file'. Many have said that this can only be done server-side. I find this hard to believe, since I just recently came across this Fiddle using HTML5 Canvas and JavaScript.
Thumbnail Fiddle
The only problem is that this requires the video to be present and the user to click play before they click a button to capture the thumbnail. I am wondering if there is a way to get the same result without the player being present and without the user clicking the button. For example: the user clicks on the file upload, selects a video file, and then a thumbnail is generated. Any help/thoughts are welcome!
canvas.drawImage must be based on HTML content (an image, video, or another canvas element).
source
Here is a simpler jsfiddle:
// and the code
function capture() {
    var canvas = document.getElementById('canvas');
    var video = document.getElementById('video');
    canvas.getContext('2d').drawImage(video, 0, 0, video.videoWidth, video.videoHeight);
}
The advantage of this solution is that you can select the thumbnail you want based on the time of the video.
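If you want a specific frame rather than whatever is currently playing, a minimal sketch is to seek first and only capture once the browser reports the seek has finished (the 3-second mark here is an arbitrary example):
    // Sketch: seek to a chosen time, then capture once the 'seeked' event fires.
    var video = document.getElementById('video');
    video.currentTime = 3; // arbitrary example timestamp
    video.addEventListener('seeked', capture, { once: true });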
Recently needed this, so I wrote a function that takes in a video file and a desired timestamp and returns an image blob at that time in the video.
Sample usage (inside an async function, since it uses await):
try {
    // get the frame at 1.5 seconds of the video file
    const cover = await getVideoCover(file, 1.5);
    // print out the resulting image blob
    console.log(cover);
} catch (ex) {
    console.log("ERROR: ", ex);
}
Function:
function getVideoCover(file, seekTo = 0.0) {
    console.log("getting video cover for file: ", file);
    return new Promise((resolve, reject) => {
        // load the file into a video player
        const videoPlayer = document.createElement('video');
        videoPlayer.setAttribute('src', URL.createObjectURL(file));
        videoPlayer.load();
        videoPlayer.addEventListener('error', () => {
            reject("error when loading video file");
        });
        // load metadata of the video to get video duration and dimensions
        videoPlayer.addEventListener('loadedmetadata', () => {
            // seek to user-defined timestamp (in seconds) if possible
            if (videoPlayer.duration < seekTo) {
                reject("video is too short.");
                return;
            }
            // delay seeking or else the 'seeked' event won't fire on Safari
            setTimeout(() => {
                videoPlayer.currentTime = seekTo;
            }, 200);
            // extract video thumbnail once seeking is complete
            videoPlayer.addEventListener('seeked', () => {
                console.log('video is now paused at %ss.', seekTo);
                // define a canvas with the same dimensions as the video
                const canvas = document.createElement("canvas");
                canvas.width = videoPlayer.videoWidth;
                canvas.height = videoPlayer.videoHeight;
                // draw the video frame to the canvas
                const ctx = canvas.getContext("2d");
                ctx.drawImage(videoPlayer, 0, 0, canvas.width, canvas.height);
                // return the canvas image as a blob
                ctx.canvas.toBlob(
                    blob => {
                        resolve(blob);
                    },
                    "image/jpeg",
                    0.75 /* quality */
                );
            });
        });
    });
}
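To connect this to the question's file input, a sketch along these lines should work (the videoInput id is hypothetical):
    // Sketch: generate a cover as soon as the user picks a video file.
    // The "videoInput" id is made up for this example.
    document.getElementById('videoInput').addEventListener('change', async (e) => {
        const file = e.target.files[0];
        if (!file) return;
        const cover = await getVideoCover(file, 1.5);
        console.log(cover); // a Blob you can preview or upload
    });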
Recently needed this, did quite some testing, and boiled it down to the bare minimum; see https://codepen.io/aertmann/pen/mAVaPx
There are some limitations on where it works, but browser support is currently fairly good: Chrome, Firefox, Safari, Opera, IE10, IE11, Android (Chrome), iOS Safari (10+).
video.preload = 'metadata';
video.src = url;
// Load video in Safari / IE11
video.muted = true;
video.playsInline = true;
video.play();
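The capture step itself is not shown above; a minimal sketch of what typically follows (assuming the same video element, drawing the first available frame to a freshly created canvas) might look like this:
    // Sketch: once frame data is available, draw the current frame to a canvas.
    video.addEventListener('loadeddata', function() {
        var canvas = document.createElement('canvas');
        canvas.width = video.videoWidth;
        canvas.height = video.videoHeight;
        canvas.getContext('2d').drawImage(video, 0, 0);
        var thumbnail = canvas.toDataURL('image/jpeg'); // use as img.src, etc.
    });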
You can use this function that I've written. You just need to pass the video file to it as an argument. It will return the dataURL of the thumbnail (i.e. image preview) of that video. You can modify the return type according to your needs.
const generateVideoThumbnail = (file: File) => {
    return new Promise((resolve) => {
        const canvas = document.createElement("canvas");
        const video = document.createElement("video");

        // this is important
        video.autoplay = true;
        video.muted = true;
        video.src = URL.createObjectURL(file);

        video.onloadeddata = () => {
            let ctx = canvas.getContext("2d");
            canvas.width = video.videoWidth;
            canvas.height = video.videoHeight;
            ctx.drawImage(video, 0, 0, video.videoWidth, video.videoHeight);
            video.pause();
            return resolve(canvas.toDataURL("image/png"));
        };
    });
};
Please keep in mind that this is an async function, so make sure to use it accordingly.
For instance:
const handleFileUpload = async (e) => {
    const thumbnail = await generateVideoThumbnail(e.target.files[0]);
    console.log(thumbnail);
}
The easiest way to display a thumbnail is using the <video> tag itself.
<video src="http://www.w3schools.com/html/mov_bbb.mp4"></video>
Use #t in the URL if you want the thumbnail from x seconds into the video.
E.g.:
<video src="http://www.w3schools.com/html/mov_bbb.mp4#t=5"></video>
Make sure that it does not include any attributes like autoplay or controls, and that it does not have a source tag as a child element.
With a little bit of JavaScript, you may also be able to play the video when the thumbnail has been clicked.
document.querySelector('video').addEventListener('click', (e) => {
    if (!e.target.controls) { // Proceed only if there are no controls
        e.target.src = e.target.src.replace(/#t=\d+/g, ''); // Remove the time that is set in the URL
        e.target.play(); // Play the video
        e.target.controls = true; // Enable controls
    }
});
With the jQuery lib you can use my code here. $video is a video element (the $ prefix is just naming; the code below uses plain DOM APIs). This function will return a string:
function createPoster($video) {
    // here you can set any time you want; note that setting currentTime
    // seeks asynchronously, so in practice wait for the 'seeked' event
    // before drawing (see the sketch after the usage example below)
    $video.currentTime = 5;

    var canvas = document.createElement("canvas");
    canvas.width = 350;
    canvas.height = 200;
    canvas.getContext("2d").drawImage($video, 0, 0, canvas.width, canvas.height);
    return canvas.toDataURL("image/jpeg");
}
Example usage:
$video.setAttribute("poster", createPoster($video));
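Since the seek is asynchronous, a safer sketch is to set the poster only after the 'seeked' event has fired:
    // Sketch: set the desired time, then capture once the seek completes.
    $video.addEventListener('seeked', function() {
        $video.setAttribute('poster', createPoster($video));
    }, { once: true });
    $video.currentTime = 5; // triggers the 'seeked' event above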
I recently stumbled on the same issue, and here is how I got around it.
Firstly, it will be easier if you have the video as an HTML element, so you either have it in the HTML like this
<video src="http://www.w3schools.com/html/mov_bbb.mp4"></video>
or you take it from the input and create an HTML element with it.
The trick is to set the start time in the video tag to the part you want to seek to and use as your thumbnail. You can do this by adding #t=1.5 to the end of the video source.
<video src="http://www.w3schools.com/html/mov_bbb.mp4#t=1.5"></video>
where 1.5 is the time you want to seek to and get a thumbnail of.
This, however, makes the video start playing from that section, so to avoid that we add an event listener on the video's play button(s) and have the video start from the beginning by setting video.currentTime = 0:
const video = document.querySelector('video');

video.addEventListener('click', (e) => {
    video.currentTime = 0;
    video.play();
});
I have the ASCII PLY file loading fine now, but it has a texture and I cannot seem to get it to load no matter how I configure things.
cameraMain = new THREE.Camera(8, MainWidth / MainHeight, 1, 10000);
sceneMain = new THREE.Scene();
ambientLightMain = new THREE.AmbientLight(0x202020);
directionalLightMainRight = new THREE.DirectionalLight(0xffffff, 0.5);
directionalLightMainLeft = new THREE.DirectionalLight(0xffffff, 0.5);
pointLightMain = new THREE.PointLight(0xffffff, 0.3);
directionalLightMainRight.position.x = dlpx;
directionalLightMainRight.position.y = dlpy;
directionalLightMainRight.position.z = dlpz;
directionalLightMainRight.position.normalize();
directionalLightMainLeft.position.x = -dlpx;
directionalLightMainLeft.position.y = dlpy;
directionalLightMainLeft.position.z = dlpz;
directionalLightMainLeft.position.normalize();
pointLightMain.position.x = plpx;
pointLightMain.position.y = plpy;
pointLightMain.position.z = plpz;
sceneMain.addLight(ambientLightMain);
sceneMain.addLight(directionalLightMainRight);
sceneMain.addLight(directionalLightMainLeft);
sceneMain.addLight(pointLightMain);
rendererMain = new THREE.WebGLRenderer();
texture = THREE.ImageUtils.loadTexture('3D/'+ mainURL +'.jpg', {}, function() {
rendererMain.render(sceneMain);
});
rendererMain.domElement.style.backgroundColor = backgroundColor;
rendererMain.setSize(MainWidth, MainHeight);
rendererMain.domElement.addEventListener('DOMMouseScroll', onRendererMainScroll, false);
rendererMain.domElement.addEventListener('mousewheel', onRendererMainScroll, false);
rendererMain.domElement.addEventListener('dblclick', onRendererMainDblClick, false);
rendererMain.domElement.addEventListener('mousedown', onRendererMainMouseDown, false);
$("#viewerMain").append(rendererMain.domElement);
window.addEventListener('mousemove', onMouseMove, false);
window.addEventListener('mouseup', onMouseUp, false);
That chunk of code initializes the model. Some of it is not needed for the purposes of this question; it has code for zooming, rotating, etc. I know there is another line; I have it commented out, but for some reason it won't show here, so I will add it separately.
material = new THREE.MeshBasicMaterial({map: texture});
That line is after the loadTexture code.
The rest of it is loaded with a loadPly function:
var geometryMain = new THREE.Geometry();

for (i in event.data.content[0])
    geometryMain.vertices.push(new THREE.Vertex(new THREE.Vector3(event.data.content[0][i][0], event.data.content[0][i][1], event.data.content[0][i][2])));
for (i in event.data.content[1])
    geometryMain.faces.push(new THREE.Face3(event.data.content[1][i][0], event.data.content[1][i][1], event.data.content[1][i][2]));

geometryMain.computeCentroids();
geometryMain.computeFaceNormals();

mainModel = new THREE.Mesh(geometryMain, new THREE.MeshLambertMaterial({color: 0xffffff, shading: THREE.FlatShading}));
sceneMain.addObject(mainModel);
mainModel.overdraw = true;
mainModel.doubleSided = true;
I have tried tweaking the THREE.Mesh() section to include the material, but it breaks everything. I have tried just adding the map: material to the MeshLambertMaterial, also a no-go. Anyone have any insight? Sorry if this is overcomplicated, but I am far from knowing this well enough to be efficient yet.
If I add
map: THREE.ImageUtils.loadTexture('3D/' + mainURL + '.jpg')
to the THREE.Mesh() line, I get
.WebGLRenderingContext: GL ERROR :GL_INVALID_OPERATION : glDrawElements: attempt to access out of range vertices in attribute 2
In order to display a texture on an object, it needs texture coordinates, and you only seem to add vertices and faces to the geometry. Looking at the THREE.PLYLoader sources, it doesn't seem to support loading the texture coordinates. (Although I'm not sure you are using that loader, as it should already return a ready Geometry instance instead of requiring you to construct the arrays manually.)
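For illustration only, UVs for the legacy Geometry class of that era were supplied per face. A rough sketch, assuming your parser could deliver a hypothetical event.data.content[2] holding one [u, v] pair per face corner (the PLY data must actually contain them):
    // Sketch: per-face texture coordinates for a legacy THREE.Geometry.
    // event.data.content[2] is hypothetical; your parser must supply the UVs.
    // Depending on your three.js revision, the property may be geometry.uvs instead.
    for (i in event.data.content[2]) {
        geometryMain.faceVertexUvs[0].push([
            new THREE.UV(event.data.content[2][i][0][0], event.data.content[2][i][0][1]),
            new THREE.UV(event.data.content[2][i][1][0], event.data.content[2][i][1][1]),
            new THREE.UV(event.data.content[2][i][2][0], event.data.content[2][i][2][1])
        ]);
    }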
Use THREE.MeshNormalMaterial. The texture shows when you use MeshNormalMaterial.
I'm trying to apply a THREE.ImageUtils.loadTextureCube() using the real-time camera onto a spinning cube.
Until now, I have managed to apply a simple texture using my video to a MeshLambertMaterial:
var geometry = new THREE.CubeGeometry(100, 100, 100, 10, 10, 10);
videoTexture = new THREE.Texture( Video ); // var "Video" is my <video> element
var material = new THREE.MeshLambertMaterial({ map: videoTexture });
Cube = new THREE.Mesh(geometry, material);
Scene.add( Cube );
That's OK and you can see the result at http://jmpp.fr/three-camera
Now I'd like to use this video stream to get a brushed-metal texture, so I tried to create another kind of material:
var videoSource = decodeURIComponent(Video.src);
var environment = THREE.ImageUtils.loadTextureCube([videoSource,   // left
                                                    videoSource,   // right
                                                    videoSource,   // top
                                                    videoSource,   // bottom
                                                    videoSource,   // front
                                                    videoSource]); // back
var material = new THREE.MeshPhongMaterial({ envMap: environment });
... but it throws the following error:
blob:http://localhost/dad58cd1-1557-41dd-beed-dbfea4c340db 404 (Not Found)
I guess loadTextureCube() is trying to load each of the six array entries as an image, but it doesn't seem to accept a video source instead.
I'm a beginner with three.js and wondered if there is a way to do that?
Thx,
jmpp
There are two ways I could see. First, if you just want the same image but with some specular highlights/shininess, then just change
    var material = new THREE.MeshLambertMaterial({ map: texture });
to
var material = new THREE.MeshPhongMaterial({
    map: texture,
    ambient: 0x030303,
    specular: 0xffffff,
    shininess: 90
});
and play with the ambient, specular, shininess settings to find what you like.
Second, if you really want to add effects to the video image itself, you could draw the image to a canvas, manipulate the pixels, and then set the texture image to that new image. This could also be done with custom shaders, avoiding the canvas step, but there are already libraries for applying image filters to canvas elements, so I'd stick with that. It would work something like this:
You would need a canvas to draw to: <canvas id='testCanvas' width=256 height=256></canvas>. Then, with JavaScript:
var canvas = document.getElementById('testCanvas');
var ctx = canvas.getContext('2d');
texture = new THREE.Texture();

// in the render loop
ctx.drawImage(Video, 0, 0);
var img = ctx.getImageData(0, 0, canvas.width, canvas.height);
// do something with the img.data pixels, see
// this article http://www.html5rocks.com/en/tutorials/canvas/imagefilters/
// then write it back to the texture
texture.image = img;
texture.needsUpdate = true;
Updated!
Actually, you can do it as an envMap; you just need to force the video to be a power of 2 with the same width and height. Video streams into Chrome at 640x480, so you still need to draw to a canvas, but only to crop/square the image. So I got this to work:
// In the access-camera part
var canvas = document.createElement('canvas');
canvas.width = 512;
canvas.height = 512;
ctx = canvas.getContext('2d');

// In the render loop
ctx.drawImage(Video, 0, 0, 512, 512);
img = ctx.getImageData(0, 0, 512, 512);
// This part is a little different, but env maps have an array
// of images instead of just one
cubeVideo.image = [img, img, img, img, img, img];
if (Video.readyState === Video.HAVE_ENOUGH_DATA)
    cubeVideo.needsUpdate = true;
Try this:
var environment = new THREE.Texture( [ Video, Video, Video, Video, Video, Video ] );
var material = new THREE.MeshPhongMaterial({ envMap: environment });
// in animate()
environment.needsUpdate = true;
Okay, now I've managed to get a shiny effect on the cube using a Phong material:
videoTexture = new THREE.Texture( Video );
var material = new THREE.MeshPhongMaterial({
    map: videoTexture,
    ambient: 0x030303,
    specular: 0xc0c0c0,
    shininess: 25
});
This doesn't look so bad.
But it seems that a THREE.Texture([Video, Video, Video, Video, Video, Video]) isn't working as an envMap. I still get a black cube.