I'm working on a project in PHP, where from a library of tracks, a user can select a track, choose a snippet from that track of a few seconds in length, and use that snippet as a search parameter on the next page.
The client would like a waveform of the track to be displayed (these have already been generated as PNG files), with a 'playhead' showing the current playback position (should be pretty easy to do). The user should be able to select a start and endpoint for their selection by dragging vertical bars, audition the selected snippet (by pressing the space bar, or similar), and then click 'Search' to submit an HTML form. The only parameters I actually need from the form are the start and end positions of the selected audio snippet.
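In other words, the form itself could be as minimal as this (the field names and the action URL are placeholders, not decided yet):

```html
<!-- Minimal sketch of the search form: the only data submitted are
     the snippet bounds. "start"/"end" and search.php are placeholders. -->
<form action="search.php" method="get">
  <input type="hidden" name="start" id="snippet-start" value="0">
  <input type="hidden" name="end" id="snippet-end" value="0">
  <button type="submit">Search</button>
</form>
```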
So, this is what the client wants, and I'm at the brainstorming phase. So far, I have a bunch of mp3 files and corresponding waveform graphics. In terms of browser support, it's going to be OK to specify that the browser must be 'the latest version of...', but I would like to offer support for all the big names: Safari, Firefox, Opera, IE, Chrome.
Do you have any suggestions for JavaScript libraries or solutions I should consider to help with the implementation: embedding the audio file, the playback control interface, and the snippet selection interface? Although it might be possible to generate corresponding OGG files for all the mp3s (to help with an HTML5 implementation), I'd prefer not to if possible, as it complicates things. So ideally I'd like an mp3-only solution which offers cross-browser support. Perhaps something like jPlayer would be a possibility?
I'm certainly familiar with jQuery, so using that would be a bonus.
Are there any existing libraries I could use which might help me with the 'snippet selection' interface? In a totally ideal world, it would be a solution with a 'scrub bar' - i.e. as you drag the start and endpoint handles, the sound at the 'playhead' is previewed instantly.
Many thanks for any ideas and suggestions!
EDIT (more information):
I'm hoping to be able to create something similar to this demo: http://www.happyworm.com/jquery/jplayer/latest/demo-07.htm except with the added feature that a user can select a 'snippet' from the track, using start- and end-point scrub bars, and can audition the snippet by pressing play.
So, my desired interface is like this:
- On page load, the track would start buffering, and the play button would play from the beginning as normal if pressed.
- If the user drags a scrub bar from the far left edge of the track, the sound at the playhead would be previewed (as with the demo I linked to), and on releasing the scrub bar, playback would continue from there to the end of the track.
- If the user drags another scrub bar from the far right of the track, the sound at the playhead would again be previewed. However, as the bar is released, the playback should jump back to the LEFT scrub bar and continue from there. (If this is too difficult to achieve, I'd be happy for the right-hand scrub bar not to preview the sound as it is dragged, but just to act as a 'marker' for the end of the selection.)
- The two scrub bars cannot pass each other, so the left one is always the 'start' and the right one is always the 'end' of the selection.
- Once the bars have been moved and a selection has been made, playback should loop the selected snippet indefinitely, until 'stop' is pressed.
- In an ideal world, I'd like the scrub bars to 'snap' to a grid of whole seconds, so that the snippet start, end and length values are always a whole number of seconds. Failing that, I'll just round the numbers to the nearest integer later in JavaScript.
For the selection, have a look at jQuery UI's slider, which supports a two-handled range out of the box: http://docs.jquery.com/UI/Slider
As for JavaScript doing playback scrubbing, good luck with that... It sounds more like a Flash or HTML5 thing.
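Here's a rough, untested sketch of how the two could be wired together with jPlayer handling the Flash/HTML5 side — element ids, the file path, and the hard-coded duration are placeholders:

```js
// Sketch: jPlayer + jQuery UI range slider for snippet selection.
var trackDuration = 180; // seconds; in practice, read it from jPlayer's durationchange event

$("#player").jPlayer({
  ready: function () {
    $(this).jPlayer("setMedia", { mp3: "track.mp3" });
  },
  supplied: "mp3",
  swfPath: "js" // Flash fallback covers browsers without native mp3 support
});

$("#range").slider({
  range: true, // two handles that cannot pass each other
  min: 0,
  max: trackDuration,
  step: 1, // snaps the handles to whole seconds
  values: [0, trackDuration],
  slide: function (event, ui) {
    // Preview the sound at whichever handle is being dragged,
    // and keep the hidden form fields in sync.
    $("#player").jPlayer("play", ui.value);
    $("#snippet-start").val(ui.values[0]);
    $("#snippet-end").val(ui.values[1]);
  }
});

// Loop the selection: when the playhead passes the right handle,
// jump back to the left handle.
$("#player").bind($.jPlayer.event.timeupdate, function (e) {
  var values = $("#range").slider("values");
  if (e.jPlayer.status.currentTime >= values[1]) {
    $(this).jPlayer("play", values[0]);
  }
});
```

Note that range: true already stops the two handles passing each other, and step: 1 gives you the snap-to-whole-seconds behaviour for free.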
I have a javascript canvas game with pixi.js that requires a player to press a certain combination of buttons to complete a level. They basically have to press the button that matches a certain color.
It turns out that players are writing bots in Python to do this task, and they are getting the maximum score each time. The game is already live and users enjoy playing it, so I can't really change anything gameplay-wise.
So I thought about a few possible solutions, but I have some concerns:

1. Captcha between each level
2. Check the speed of the input
3. Check how consistent the input is
The captcha will hurt the user experience, and there are tons of videos on how to bypass it. Options 2 and 3 will fail once the creators of the bots understand what is happening. So I am really stuck on what I could do.
I would consider a random grace period before allowing the buttons to be clicked. This may stump some bots, but it is circumventable.
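A sketch of that idea (the delays are arbitrary and would need tuning):

```js
// Random grace period: ignore clicks until a randomized delay after the
// level is shown. Easy to circumvent, but it breaks bots that click on
// a fixed schedule.
var inputEnabled = false;

function startLevel() {
  inputEnabled = false;
  var delay = 500 + Math.random() * 1500; // 0.5–2 s, tune to taste
  setTimeout(function () { inputEnabled = true; }, delay);
}

function onButtonClick(button) {
  if (!inputEnabled) return; // swallow clicks during the grace period
  // ...normal game handling...
}
```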
Besides that, I would profile the timing of the clicks/interactions. Every time the next level is requested, compare it to the profile, and if the timings are consistently the same, introduce a randomized button ID, button shape (circle, oval, square, etc.), or button placement (swap buttons) to avoid easy scripting. The font and the actual text could also be varied.
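For the timing profile, something like this could be a starting point (the sample size and threshold are invented and would need calibrating against real players):

```js
// Record the gaps between clicks and flag players whose timing is
// suspiciously consistent. Humans are noisy; scripts usually are not.
var clickTimes = [];

function recordClick() {
  clickTimes.push(performance.now());
}

function looksLikeBot() {
  if (clickTimes.length < 10) return false;
  var gaps = [];
  for (var i = 1; i < clickTimes.length; i++) {
    gaps.push(clickTimes[i] - clickTimes[i - 1]);
  }
  var mean = gaps.reduce(function (a, b) { return a + b; }, 0) / gaps.length;
  var variance = gaps.reduce(function (a, g) {
    return a + (g - mean) * (g - mean);
  }, 0) / gaps.length;
  // Near-zero spread in click intervals suggests a script.
  return Math.sqrt(variance) < 10; // ms; hypothetical threshold
}
```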
I would also change the input element to <input type="image">, since it will give you the exact click coordinates (if possible - I'm not familiar with pixi.js), and this will aid in the profiling.
You could also implement some sort of mouse position tracker, but people on touchscreens will not produce data for this. You could supplement it with an additional check for whether the user's input is touch, but a bot would easily be able to circumvent that.
EDIT
I don't know whether there is some library that detects other JavaScript imports and could thereby detect potential bots, but it might be one avenue to consider.

Doing something like this: checking whether the user has a Chrome extension installed, to verify that you are running in a browser and not in a Python environment, could be another avenue. It would mean restricting your users to certain browsers and, like a lot of other code, it could be circumvented. Cost/benefit should be kept in mind here.

If everything is being run through the actual browser with some sort of headless interface, it is not going to be useful at all.
EDIT 2
A quick googling of "python automate browser game" brings up a tutorial on how to automate browser games with Python. Based on a cursory glance, making your buttons move around and changing the font would be effective, and even resizing the playing area "randomly" (even if you have a full-screen function) may be a viable defense. Again, following the tutorial, trying to automate your game with it, and seeing how to block that would be a good exercise.
You could also consider asking some students for help. This could be a good project idea for many computer studies programmes that offer project-based courses. It could also be a student-job type of deal - if you want to ensure that you get a result and a "report".
I think your approach is valid. It seems a bit excessive to add a CAPTCHA between each level; perhaps add it before the game starts.

It might be a good idea to check the interval between individual clicks and define some threshold below which you can safely assume that it was a bot that clicked the button.
Another approach you could take is to make it more complicated to look up the correct buttons: randomize the element IDs, and don't render the label inside the buttons but as separate elements (I assume it is a game with some fixed window size and you don't care about mobile that much). I am not familiar with Pixi.js, but that could be an approach to consider.
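For example, a position shuffle along these lines (untested, and the slot coordinates are made up) would invalidate any bot that relies on fixed click coordinates:

```js
// Shuffle button positions each level so recorded coordinates stop
// working. Assumes the buttons are pixi display objects and that there
// are at least as many slots as buttons.
function layoutButtons(buttons, stage) {
  var slots = [{ x: 100, y: 400 }, { x: 250, y: 400 }, { x: 400, y: 400 }];
  // Fisher–Yates shuffle of the slot order
  for (var i = slots.length - 1; i > 0; i--) {
    var j = Math.floor(Math.random() * (i + 1));
    var tmp = slots[i]; slots[i] = slots[j]; slots[j] = tmp;
  }
  buttons.forEach(function (button, i) {
    button.position.set(slots[i].x, slots[i].y); // pixi display objects expose .position
    stage.addChild(button);
  });
}
```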
EDIT
What if you run your game in an iframe?
We are creating an app that lets a user capture a number of images and it will try to create a 3D model of the target object. In order to help the users capture useful images we give them some guidance while they move their phone from one capture to the next.
We have a nice prototype working by means of navigator.mediaDevices.getUserMedia() that captures video, displays it in a <video> element, and has an overlay that shows how to move the phone. When they are ready they press a button and we grab the current frame of the streamed video.
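For context, the frame grab amounts to something like this (a simplified sketch, not our exact code):

```js
// Draw the <video> element onto a canvas and export the pixels.
function grabFrame(video) {
  var canvas = document.createElement("canvas");
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  canvas.getContext("2d").drawImage(video, 0, 0);
  return canvas.toDataURL("image/jpeg", 0.95); // or canvas.toBlob(...)
}
```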
We were quite happy with this until we realized that very often the captured image would not have enough quality; mainly, the images tend to be a bit blurred because the user may not hold the device totally still. This causes the math behind creating the 3D model to fail.
I am now tasked with attempting to improve this, but I think I don't have many options. Here is what I have been investigating, along with the drawbacks:
JavaScript's ImageCapture API. This seems to be exactly what we need: a way to actually take a picture instead of grabbing a frame from a video. While the API still has experimental status, it seems pretty stable, and Chrome has had it implemented since version 59. The problem is that Safari (our main target) does not have it implemented, and it seems they never will. I can't really find information on what their plan is, but as of today this is not an option.
Use the input element of type file with the capture attribute (<input type="file" accept="image/*" capture>). While this lets me capture images with the native camera, I cannot give the user any guidance as far as I know.

Create a whole mobile app. This requires another year of work and asking our existing users to install an app, which may not be possible. It would also leave Android devices out, which we'd prefer to avoid.

While typing this, I thought of perhaps using the video itself instead of captured images, but I'm not sure that would help in any way.

Instead of a different approach to capturing the image, I could try to only grab the frame once I can confirm that the device is as close to still as possible (using a threshold value). Perhaps I could use the gyroscope for this (we already use it to check that the device has been moved to a position and angle we consider useful for the process). The drawback is that I am not sure it would really mitigate our problem... how still is still enough? Is it possible for a person to be that still for a second? (A rough sketch of this idea is below.)
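Something like this is what I have in mind for the stillness check — the threshold and sample window are pure guesses that would need tuning on real devices:

```js
// Gate the frame grab on device stillness, using devicemotion.
// Note: iOS 13+ requires a permission prompt via
// DeviceMotionEvent.requestPermission() before these events fire.
var recentAccel = [];

window.addEventListener("devicemotion", function (e) {
  var a = e.acceleration; // gravity-free acceleration in m/s^2 (may be null)
  if (!a) return;
  recentAccel.push(Math.abs(a.x) + Math.abs(a.y) + Math.abs(a.z));
  if (recentAccel.length > 30) recentAccel.shift(); // roughly the last half second
});

function deviceIsStill() {
  if (recentAccel.length < 30) return false;
  var max = Math.max.apply(null, recentAccel);
  return max < 0.15; // hypothetical stillness threshold
}

// Before grabbing the frame: wait until deviceIsStill() returns true.
```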
So my question is: can anyone think of another alternative to those I described? Or perhaps an improvement to one of the enumerated ones?
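For reference, the ImageCapture usage from option 1 that Safari is missing boils down to this:

```js
// Take a full-resolution still from a getUserMedia stream.
// Works in Chrome 59+; not available in Safari as noted above.
async function takeStill(stream) {
  const track = stream.getVideoTracks()[0];
  const imageCapture = new ImageCapture(track);
  return await imageCapture.takePhoto(); // resolves to a Blob
}
```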
BTW, does anyone know what Apple's plans are for the ImageCapture API?
I have a web application used for virtual house tours. Currently I am using VRView for these tours, and it has worked pretty well; however, I've run into an issue with the gyroscope that I need fixed as soon as possible.
VRView automatically rotates the camera based on a user's device orientation. As a user turns their phone, the virtual house tour also turns, so the user is able to "look around" the house. This is great for most use cases; however, lower-end devices have issues when processing this sort of change. I need a way for users to disable the automatic rotation and simply swipe on their phones to look around.
I've tried the Permissions API, attempting to revoke access to the gyroscope, but due to browser compatibility issues with that API, it doesn't work. I also can't find any documentation on this in the VRView library. Any help is much appreciated.
tldr;
You're right, this doesn't seem to be available via their API. It looks like you may have to fork the library and make some adjustments. If you want to go down this path, I'd suggest forking the repo, seeing if you can successfully disable the motion emitter, and then see if you can use the webvr-polyfill to initiate drag controls. It may also be possible to just disable the gyro-based rotation via webvr-polyfill directly.
More in-depth explanation:
The motion information is being published to the VR View iframes (which I believe then feed it to the webvr-polyfill controls) in two locations:
https://github.com/googlearchive/vrview/blob/bd12336b97eccd4adc9e877971c1a7da56df7d69/scripts/js/device-motion-sender.js#L35
https://github.com/googlearchive/vrview/blob/bd12336b97eccd4adc9e877971c1a7da56df7d69/src/api/iframe-message-sender.js#L45
When a browser's UA (user agent) flag indicates it can't use gyro controls, you would need to include a flag that disables this functionality (or disables the listener in the iframe).
Normally, to enable drag rotation, I think you would then need to write a listener for the start and end of drag events that would translate those events into camera rotation. (Something similar to what this person is suggesting: https://github.com/googlearchive/vrview/issues/131#issuecomment-289522607)
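Very roughly, that listener could look like this — untested, and viewerEl (the element receiving input) and camera (the three.js camera inside VR View) are hypothetical names for whatever the fork exposes:

```js
// Translate pointer drags into yaw rotation.
var dragging = false, lastX = 0;

viewerEl.addEventListener("pointerdown", function (e) {
  dragging = true;
  lastX = e.clientX;
});
window.addEventListener("pointermove", function (e) {
  if (!dragging) return;
  var dx = e.clientX - lastX;
  lastX = e.clientX;
  camera.rotation.y -= dx * 0.005; // radians per pixel of drag; tune to taste
});
window.addEventListener("pointerup", function () {
  dragging = false;
});
```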
However, it appears that the controls are imported via webvr-polyfill. The 'window.WebVRConfig' object is coming from webvr-polyfill, if I'm following this correctly.
See here: https://github.com/googlearchive/vrview/blob/bd12336b97eccd4adc9e877971c1a7da56df7d69/src/embed/main.js#L77
The code linked above shows VR View adjusting the WebVRConfig when it detects a certain flag (in this case the 'YAW_ONLY' attribute). I believe you would have to do something similar.
https://github.com/immersive-web/webvr-polyfill
See here for the YAW_ONLY attribute: https://github.com/immersive-web/webvr-polyfill/blob/e2958490653bfcda4df83881d554bcdb641cf45b/src/webvr-polyfill.js#L68
See here for an example of adjusting controls in webvr-polyfill: https://github.com/immersive-web/webvr-polyfill#using
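As a sketch, a fork could set the polyfill flags before webvr-polyfill is loaded. YAW_ONLY and TOUCH_PANNER_DISABLED are real webvr-polyfill options, but whether this combination gives exactly the swipe-only behaviour you want inside VR View would need testing:

```js
// webvr-polyfill reads window.WebVRConfig at load time, so this must
// run before the polyfill script is pulled in.
window.WebVRConfig = window.WebVRConfig || {};
window.WebVRConfig.YAW_ONLY = true;               // restrict rotation to the yaw axis
window.WebVRConfig.TOUCH_PANNER_DISABLED = false; // keep touch-drag look controls
```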
I have followed a couple of tutorials (http://www.adobe.com/devnet/html5/articles/javascript-motion-detection.html, http://www.html5rocks.com/en/tutorials/canvas/notearsgame/) and spliced the two together to create a game (https://github.com/gazzwi86/HTML5-Motion-Detection). While I have a few things to work out with the blending to improve the quality of the detection, I was wondering how I would go about detecting grabbing and swipe gestures, say for navigating a web page.
Could you point me in the direction of some examples or outline the principles so that I may try it myself.
I wouldn't go for it. It would require heavy processing on the client side to get reasonably good detection.

You can simply track moving objects (like a hand) with some threshold (blur the image first to get rid of noise). The background behind the user will mostly stay the same, so you can ignore it too.

Then convert the image to black and white and try to capture your moving object as a single polygon.
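A minimal frame-differencing sketch of that idea (the thresholds are arbitrary and would need tuning):

```js
// Compare the current video frame to the previous one, count pixels that
// changed more than a threshold, and track where the motion is.
var w = 320, h = 240;
var canvas = document.createElement("canvas");
canvas.width = w;
canvas.height = h;
var ctx = canvas.getContext("2d");
var prev = null;

function motionCentroidX(video) {
  ctx.drawImage(video, 0, 0, w, h);
  var curr = ctx.getImageData(0, 0, w, h);
  var sumX = 0, count = 0;
  if (prev) {
    for (var i = 0; i < curr.data.length; i += 4) {
      var d = Math.abs(curr.data[i] - prev.data[i]) +
              Math.abs(curr.data[i + 1] - prev.data[i + 1]) +
              Math.abs(curr.data[i + 2] - prev.data[i + 2]);
      if (d > 60) { // changed pixel
        sumX += (i / 4) % w;
        count++;
      }
    }
  }
  prev = curr;
  return count > 200 ? sumX / count : null; // mean x of the motion, if any
}

// Call motionCentroidX every frame; a value sweeping steadily across
// several frames reads as a left or right swipe.
```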
What I would experiment with after that: set up a little neural network and train it myself by moving my hand.

Well, that's just my 2 cents on how I would try to implement it. It would be really nice to hear from you later how you did it and what the results were :)
Let's say I can fetch the total time of a certain video in seconds. I can also get the starting times of sub-events in that video.

Is there a way to output some kind of timeline with the sub-events arranged by their starting times, relative to the total duration of the video?

The sub-events should also be linked to the video in such a way that when I click on one, the video jumps to the beginning of that sub-event.
Basically, I would like to hide the timeline of the HTML5 video and stick a new, dynamically generated one under the video, with a few selected events to navigate through - no slider, just clicking (for example, a tennis match with a set-and-game timeline instead of a min:sec timeline).

I'm using HTML5 video and JavaScript.
Well, you can use this jQuery plugin for the timeline. It would look cool.

And for the jumping, you can read here.
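If you'd rather roll it by hand, a plain sketch like this would also do (the element ids and the events array are placeholders):

```js
// One button per sub-event, positioned proportionally to its start time.
var video = document.getElementById("match-video");
var timeline = document.getElementById("timeline");
var events = [
  { label: "Set 1", start: 0 },
  { label: "Game 2", start: 312 },
  { label: "Set 2", start: 2105 }
];

// Wait for loadedmetadata so video.duration is known.
video.addEventListener("loadedmetadata", function () {
  events.forEach(function (ev) {
    var btn = document.createElement("button");
    btn.textContent = ev.label;
    btn.style.position = "absolute";
    btn.style.left = (ev.start / video.duration * 100) + "%"; // place along the bar
    btn.addEventListener("click", function () {
      video.currentTime = ev.start; // jump straight to the sub-event
      video.play();
    });
    timeline.appendChild(btn);
  });
});
```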