I have a Google Doc with images. I would like to open a selected image in a page in another window (the Google Doc is a role-playing game scenario, and I want to show the image to my players on a second screen).
I have created a sidebar with a Google Apps Script and I am able to show the selected image in this sidebar.
Now, I don't know how to open a new window (or connect to an existing window) and send the image data to it.
I started by trying to use PresentationRequest, but I get the error "PresentationRequest is not defined" on initialization:
presentationRequest = new PresentationRequest('receiver.html');
My source:
https://developers.google.com/web/updates/2018/04/present-web-pages-to-secondary-attached-displays
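For what it's worth, a quick feature check (just a sketch) confirms the API is simply not exposed in the sidebar's sandboxed iframe:

if ('PresentationRequest' in window) {
  var presentationRequest = new PresentationRequest('receiver.html');
} else {
  // This is the branch taken inside the Apps Script sidebar
  console.log('Presentation API is not available in this context');
}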
For information (and in case it helps someone), here is how I send the image to the sidebar page:
function selectImg() {
var doc = DocumentApp.getActiveDocument();
var selection = doc.getSelection();
if (selection) {
var elements = selection.getRangeElements();
var e = elements[0].getElement();
if (e.getType() == DocumentApp.ElementType.INLINE_IMAGE) {
var blobImg = e.asInlineImage().getBlob();
return 'data:' + blobImg.getContentType() + ';base64,' + Utilities.base64Encode(blobImg.getBytes());
}
}
// Fallback: nothing usable selected; the sidebar will show this message
return 'Sélectionnez une image';
}
The HTML code:
<!DOCTYPE html>
<html>
<head>
<base target="_top">
<style type="text/css">
.tailMax {
max-width: 260px;
max-height: 260px;
}
.centre {
display: block;
margin-left: auto;
margin-right: auto;
}
</style>
</head>
<body>
<form id="formJdr">
<div style="padding-bottom: 10px;">
<button type="button" id="btnAffImg" onclick="google.script.run.withSuccessHandler(afficheImg).selectImg()">Afficher</button>
<label id="lblImg">Sélectionnez une image</label>
</div>
<img id="img" class="tailMax centre"/>
</form>
<script>
function afficheImg(valeur) {
// The server returns either a data URI (the image) or a plain message
// string, so tell them apart by the "data:" prefix
if (typeof valeur === "string" && valeur.indexOf("data:") !== 0){
// Message
afficheMessage(valeur);
}
else {
try {
// Image to show
afficheMessage("");
document.getElementById("img").src = valeur;
}
catch(error) {
afficheMessage(error);
}
}
}
function afficheMessage(message) {
document.getElementById("lblImg").innerHTML = message;
}
</script>
</body>
</html>
I use a Chrome browser.
Do you think it is possible?
Modify your try statement as follows:
try {
// Image to show
afficheMessage("");
var image=document.getElementById("img");
image.src = valeur;
var w = window.open("", '_blank');
w.document.write(image.outerHTML);
}
catch(error) {
afficheMessage(error);
}
var w = window.open("", '_blank'); w.document.write(image.outerHTML); lets you open a new window and then write the image element (with its base64 data URI) into it.
OK, with the help of Ziganotschka, I updated my JavaScript code for this.
Now I can change the image in the new window.
There are just some improvements to make to how this window is opened, and it will be good; see the sketch after the code below.
<script>
var affichage;
function afficheImg(valeur) {
if (typeof valeur === "string" && valeur.indexOf("data:") !== 0){
afficheMessage(valeur);
}
else {
try {
afficheMessage("");
var image = document.getElementById("img");
image.src = valeur;
affichage.document.body.innerHTML = "";
affichage.document.write(image.outerHTML);
}
catch(error) {
afficheMessage(error);
}
}
}
function afficheMessage(message) {
document.getElementById("lblImg").innerHTML = message;
}
window.onload = function() {
affichage = window.open("", '_blank');
};
</script>
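One improvement to sketch here: popup blockers generally only allow window.open() inside a user gesture, so opening (or reusing) the display window from the button's click handler should be more reliable than doing it in window.onload. The ouvreFenetre helper name below is made up for this example:

var affichage = null;

// Hypothetical helper: open the display window on demand, and reuse it if
// it is already open; a named target (instead of '_blank') allows reuse.
function ouvreFenetre() {
  if (!affichage || affichage.closed) {
    affichage = window.open("", "affichageJdr");
  }
  return affichage;
}

The button's onclick would then call ouvreFenetre() before google.script.run, so the open happens inside the click gesture.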
I have an HTML file which I want to read and append as HTML. I have tried the code below, but it is not working.
Approach 1:
var file = "abc.html";
var str = "";
var txtFile = new File(file);
txtFile.open("r");
while (!txtFile.eof) {
// read each line of text
str += txtFile.readln() + "\n";
}
$('#myapp').html(str);
Approach 2:
var file = "abc.html";
var rawFile = new XMLHttpRequest();
alert('33333333');
rawFile.open("GET", file, false);
alert('44444');
rawFile.onreadystatechange = function () {
alert('5555555555');
if (rawFile.readyState === 4) {
alert('66666666666');
alert(rawFile.readyState);
if (rawFile.status === 200 || rawFile.status == 0) {
var allText = rawFile.responseText;
$('#myapp').html(allText);
alert(allText);
}
}
}
rawFile.send(null);
In Approach 2, it is not going into the onreadystatechange handler.
I thought of another approach: putting the entire abc.html content into a string variable and doing a similar $('#myapp').html(allText);, but this looks like a very bad approach, because later I would need to do the same for 10-15 other files. So could you help me out?
Note: my application is running in offline mode, meaning I cannot use the internet.
I have tried this solution, but it's also not working.
It is not possible, as browser JavaScript does not have access to the local file system.
But you can use a different method:
-> you can serve that file from a local server and load it with an HTTP request, with any backend framework.
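For example, once the folder is served by a local static server, loading the fragment is short; this sketch assumes abc.html sits next to the page and that jQuery is loaded, as in the question:

// Sketch: load the fragment over HTTP from the same local server.
// This only works when the page itself is served via http://, not file://.
fetch('abc.html')
  .then(function (response) { return response.text(); })
  .then(function (html) { $('#myapp').html(html); })
  .catch(function (err) { console.error('could not load abc.html', err); });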
I think you can adapt this pen to use as you wish:
https://codepen.io/alvaro-alves/pen/wxQwmg?editors=1111
CSS:
#drop_zone {
width: 100px;
height: 100px;
background: #000;
background-repeat: no-repeat;
background-size: 100px 100px;
opacity: 0.5;
border: 1px #000 dashed;
}
HTML:
<html>
<body>
<div id="drop_zone" ondrop="dropHandler(event);" ondragover="dragOverHandler(event);">
</div>
</body>
</html>
JS:
// drop handler for the XML
function dropHandler(ev) {
ev.preventDefault();
var file, reader, parsed;
if (ev.dataTransfer.items) {
for (var i = 0; i < ev.dataTransfer.items.length; i++) {
if (ev.dataTransfer.items[i].kind === 'file') {
file = ev.dataTransfer.items[i].getAsFile();
reader = new FileReader();
reader.onload = function() {
parsed = new DOMParser().parseFromString(this.result, "text/xml");
console.log(parsed);
};
reader.readAsText(file);
console.log('... file[' + i + '].name = ' + file.name);
}
}
}
removeDragData(ev)
}
function dragOverHandler(ev) {
ev.preventDefault();
}
function removeDragData(ev) {
if (ev.dataTransfer.items) {
ev.dataTransfer.items.clear();
} else {
ev.dataTransfer.clearData();
}
}
You will just need to handle the result.
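For the question's use case, the onload handler would parse the dropped file as HTML rather than XML and inject it; a sketch of that adaptation (assuming the #myapp element from the question):

reader.onload = function() {
  // Parse the dropped file as an HTML document instead of XML...
  var doc = new DOMParser().parseFromString(this.result, "text/html");
  // ...and append its body content to the target element.
  document.getElementById("myapp").innerHTML = doc.body.innerHTML;
};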
I need to emulate what an old manual typewriter does when printing what is being typed on a web page. I want to develop a JavaScript function that I can pass a string, and it would print out each character with a delay, with a sound file synced to each letter.
I'm new to JavaScript. What is the preferred method to do this? Should I be looking at jQuery for this? Or is this something simple to do?
I've seen problems with sound files being triggered like this on some web browsers; is there an audio file format which is best for this sort of thing?
I've found this, but the problem is, it doesn't work on all web browsers:
https://rawgit.com/mehaase/js-typewriter/master/example3-typewriter/index.html
You can try something like this:
// The delay between each keystroke
var delay = 300;
// The typewriter sound file url
var clickURL = "https://cdn.rawgit.com/halimb/res/6ffa798d/typewriter.wav";
// Get a reference to the container div
var container = document.getElementById("container");
var sampleString = "Hello world!";
//get a reference to the start button and typewrite onclick
var start = document.getElementById("btn");
start.onclick = function() { typewrite( sampleString ); };
function typewrite( str ) {
var i = 0;
container.innerHTML = "";
type();
function type() {
var click = new Audio( clickURL );
// This is triggered when the browser has enough of the file to play through
click.oncanplaythrough = function() {
click.play();
// Add the character to the container div
container.innerHTML += str[i];
i++;
if(i < str.length) {
window.setTimeout(type, delay);
}
}
}
}
* {
font-family: Courier;
font-size: 32px;
}
.btn {
display: inline-block;
border: 1px solid black;
border-radius: 5px;
padding: 10px;
cursor: pointer;
margin: 10px;
}
<div class="btn" id="btn">Start</div>
<div id="container"></div>
Update for Safari: it seems the audio has to be triggered by a user event (e.g. onclick), so I added a button and made the typewriter start on click.
The downside is that there's no way to pre-load the audio file this way; Safari makes a server request and downloads the audio each time it is played. The only (dirty) way I could think of to overcome this is to provide a data URI instead of the audio URL. You can try that if the playback speed really matters (this can be helpful: https://www.ibm.com/developerworks/library/wa-ioshtml5/).
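Another option worth trying is the Web Audio API: decode the file once up front and replay the decoded buffer on each keystroke, which avoids the per-play download. A sketch, reusing the clickURL variable from above:

// Sketch: preload and decode the click sound once, then replay the buffer.
var audioCtx = new (window.AudioContext || window.webkitAudioContext)();
var clickBuffer = null;

fetch(clickURL)
  .then(function (res) { return res.arrayBuffer(); })
  .then(function (data) { return audioCtx.decodeAudioData(data); })
  .then(function (buffer) { clickBuffer = buffer; });

function playClick() {
  if (!clickBuffer) return; // not decoded yet
  var src = audioCtx.createBufferSource();
  src.buffer = clickBuffer;
  src.connect(audioCtx.destination);
  src.start();
}

type() would then call playClick() instead of creating a new Audio each time (on Safari the context still has to be created or resumed inside the click handler).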
I have a PDF embedded on my page: using an <iframe>, I load an HTML page which contains an <object> tag; inside that there is an <embed> tag which embeds the PDF, and a <p> tag which shows up if Adobe Reader is not installed.
On Firefox, Chrome and IE 11, if a PDF reader is installed it shows only the PDF, but when no reader is installed it shows the message in the <p> tag: "install the Adobe Reader".
My issue is: in IE10, even if Adobe Reader is installed, it shows the "install the Adobe Reader" message in the <p> tag. Please suggest how to hide the message if Adobe Reader is installed; the message should show only if no PDF reader is installed.
Here is my CODE:
The iframe code from which the PDF page is called:
<div id="pdf">
<iframe id="pdfIframe" name="pdfIframe" src="pdfView.html" style="width: 100%; height: 100%;" scrolling="auto" frameborder="1">
Your browser doesn't support inline frames.
</iframe>
</div>
PDF page Code:
<body>
<style>
html, body, #blankPane {
height: 100%;
margin: 0;
padding: 0;
}
#blankPane p {
font-weight: bold;
line-height: 30px;
height: auto;
width: 98%;
margin: 0 auto;
color: #bc0000;
}
#blankPane * {
width: 100%;
height: 100%;
margin: 0;
padding: 0;
}
</style>
<div id="blankPane" class="overflowHidden">
<object data="lorem.pdf" type="application/pdf">
<p>
It appears you don't have Adobe Reader or PDF support in this web browser.
<br />
Click here to download the PDF OR Click here to install Adobe Reader
</p>
<embed id="pdfDocument" src="lorem.pdf" type="application/pdf" />
</object>
</div>
Please suggest!!!
You can detect which version of Adobe Acrobat is installed with a JavaScript snippet like the one below, or you can use FlexPaper to display your document if you prefer not to rely on Adobe Acrobat.
var getAcrobatInfo = function() {
var getBrowserName = function() {
return this.name = this.name || function() {
var userAgent = navigator ? navigator.userAgent.toLowerCase() : "other";
if(userAgent.indexOf("chrome") > -1) return "chrome";
else if(userAgent.indexOf("safari") > -1) return "safari";
else if(userAgent.indexOf("msie") > -1) return "ie";
else if(userAgent.indexOf("firefox") > -1) return "firefox";
return userAgent;
}();
};
var getActiveXObject = function(name) {
try { return new ActiveXObject(name); } catch(e) {}
};
var getNavigatorPlugin = function(name) {
for(var key in navigator.plugins) {
var plugin = navigator.plugins[key];
if(plugin.name == name) return plugin;
}
};
var getPDFPlugin = function() {
return this.plugin = this.plugin || function() {
if(getBrowserName() == 'ie') {
//
// load the activeX control
// AcroPDF.PDF is used by version 7 and later
// PDF.PdfCtrl is used by version 6 and earlier
return getActiveXObject('AcroPDF.PDF') || getActiveXObject('PDF.PdfCtrl');
}
else {
return getNavigatorPlugin('Adobe Acrobat') || getNavigatorPlugin('Chrome PDF Viewer') || getNavigatorPlugin('WebKit built-in PDF');
}
}();
};
var isAcrobatInstalled = function() {
return !!getPDFPlugin();
};
var getAcrobatVersion = function() {
try {
var plugin = getPDFPlugin();
if(getBrowserName() == 'ie') {
var versions = plugin.GetVersions().split(',');
var latest = versions[0].split('=');
return parseFloat(latest[1]);
}
if(plugin.version) return parseInt(plugin.version);
return plugin.name;
}
catch(e) {
return null;
}
}
//
// The returned object
//
return {
browser: getBrowserName(),
acrobat: isAcrobatInstalled() ? 'installed' : false,
acrobatVersion: getAcrobatVersion()
};
};
Example of how to call these functions:
var info = getAcrobatInfo();
alert(info.browser+ " " + info.acrobat + " " + info.acrobatVersion);
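Applied to the markup in the question, the result can then be used to hide the fallback paragraph in IE10 when Reader is actually present; a sketch:

// Sketch: hide the "install Adobe Reader" fallback when a PDF plugin is found.
var info = getAcrobatInfo();
if (info.acrobat === 'installed') {
  var fallback = document.querySelector('#blankPane object > p');
  if (fallback) fallback.style.display = 'none';
}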
I would like to record the users webcam and audio and save it to a file on the server. These files would then be able to be served up to other users.
I have no problems with playback, however I'm having problems getting the content to record.
My understanding is that the getUserMedia .record() function has not yet been written - only a proposal has been made for it so far.
I would like to create a peer connection on my server using the PeerConnectionAPI. I understand this is a bit hacky, but I'm thinking it should be possible to create a peer on the server and record what the client-peer sends.
If this is possible, I should then be able to save this data to flv or any other video format.
My preference is actually to record the webcam + audio client-side, to allow the client to re-record videos if they didn't like their first attempt before uploading. This would also allow for interruptions in network connections. I've seen some code which allows recording of individual 'images' from the webcam by sending the data to the canvas - that's cool, but I need the audio too.
Here's the client side code I have so far:
<video autoplay></video>
<script language="javascript" type="text/javascript">
function onVideoFail(e) {
console.log('webcam fail!', e);
};
function hasGetUserMedia() {
// Note: Opera is unprefixed.
return !!(navigator.getUserMedia || navigator.webkitGetUserMedia ||
navigator.mozGetUserMedia || navigator.msGetUserMedia);
}
if (hasGetUserMedia()) {
// Good to go!
} else {
alert('getUserMedia() is not supported in your browser');
}
window.URL = window.URL || window.webkitURL;
navigator.getUserMedia = navigator.getUserMedia || navigator.webkitGetUserMedia ||
navigator.mozGetUserMedia || navigator.msGetUserMedia;
var video = document.querySelector('video');
var streamRecorder;
var webcamstream;
if (navigator.getUserMedia) {
navigator.getUserMedia({audio: true, video: true}, function(stream) {
video.src = window.URL.createObjectURL(stream);
webcamstream = stream;
// streamrecorder = webcamstream.record();
}, onVideoFail);
} else {
alert ('failed');
}
function startRecording() {
streamRecorder = webcamstream.record();
setTimeout(stopRecording, 10000);
}
function stopRecording() {
streamRecorder.getRecordedData(postVideoToServer);
}
function postVideoToServer(videoblob) {
/* var x = new XMLHttpRequest();
x.open('POST', 'uploadMessage');
x.send(videoblob);
*/
var data = {};
data.video = videoblob;
data.metadata = 'test metadata';
data.action = "upload_video";
jQuery.post("http://www.foundthru.co.uk/uploadvideo.php", data, onUploadSuccess);
}
function onUploadSuccess() {
alert ('video uploaded');
}
</script>
<div id="webcamcontrols">
<a class="recordbutton" href="javascript:startRecording();">RECORD</a>
</div>
You should definitely have a look at Kurento. It provides a WebRTC server infrastructure that allows you to record from a WebRTC feed and much more. You can also find some examples for the application you are planning here. It is really easy to add recording capabilities to that demo, and store the media file in a URI (local disk or wherever).
The project is licensed under the Apache 2.0 license (it was formerly LGPL).
EDIT 1
Since this post, we've added new tutorials that show how to add the recorder in a couple of scenarios:
kurento-hello-world-recording: simple recording tutorial, showing the different capabilities of the recording endpoint.
kurento-one2one-recording: How to record a one-to-one communication in the media server.
kurento-hello-world-repository: use an external repository to record the file.
Disclaimer: I'm part of the team that develops Kurento.
I believe using Kurento or other MCUs just for recording videos would be a bit of overkill, especially considering that Chrome has had MediaRecorder API support since v47 and Firefox since v25. So at this juncture you might not even need an external JS library to do the job; try this demo I made to record video/audio using MediaRecorder:
Demo - works in Chrome and Firefox (I intentionally left out the code for pushing the blob to a server)
GitHub Code Source
If you are running Firefox, you can test it right here (Chrome needs HTTPS):
'use strict'
let log = console.log.bind(console),
id = val => document.getElementById(val),
ul = id('ul'),
gUMbtn = id('gUMbtn'),
start = id('start'),
stop = id('stop'),
stream,
recorder,
counter = 1,
chunks,
media;
gUMbtn.onclick = e => {
let mv = id('mediaVideo'),
mediaOptions = {
video: {
tag: 'video',
type: 'video/webm',
ext: '.webm', // match the video/webm type above
gUM: {
video: true,
audio: true
}
},
audio: {
tag: 'audio',
type: 'audio/ogg',
ext: '.ogg',
gUM: {
audio: true
}
}
};
media = mv.checked ? mediaOptions.video : mediaOptions.audio;
navigator.mediaDevices.getUserMedia(media.gUM).then(_stream => {
stream = _stream;
id('gUMArea').style.display = 'none';
id('btns').style.display = 'inherit';
start.removeAttribute('disabled');
recorder = new MediaRecorder(stream);
recorder.ondataavailable = e => {
chunks.push(e.data);
if (recorder.state == 'inactive') makeLink();
};
log('got media successfully');
}).catch(log);
}
start.onclick = e => {
start.disabled = true;
stop.removeAttribute('disabled');
chunks = [];
recorder.start();
}
stop.onclick = e => {
stop.disabled = true;
recorder.stop();
start.removeAttribute('disabled');
}
function makeLink() {
let blob = new Blob(chunks, {
type: media.type
}),
url = URL.createObjectURL(blob),
li = document.createElement('li'),
mt = document.createElement(media.tag),
hf = document.createElement('a');
mt.controls = true;
mt.src = url;
hf.href = url;
hf.download = `${counter++}${media.ext}`;
hf.innerHTML = `download ${hf.download}`;
li.appendChild(mt);
li.appendChild(hf);
ul.appendChild(li);
}
button {
margin: 10px 5px;
}
li {
margin: 10px;
}
body {
width: 90%;
max-width: 960px;
margin: 0px auto;
}
#btns {
display: none;
}
h1 {
margin-bottom: 100px;
}
<link type="text/css" rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.6/css/bootstrap.min.css">
<h1> MediaRecorder API example</h1>
<p>For now it is supported only in Firefox(v25+) and Chrome(v47+)</p>
<div id='gUMArea'>
<div>
Record:
<input type="radio" name="media" value="video" checked id='mediaVideo'>Video
<input type="radio" name="media" value="audio">audio
</div>
<button class="btn btn-default" id='gUMbtn'>Request Stream</button>
</div>
<div id='btns'>
<button class="btn btn-default" id='start'>Start</button>
<button class="btn btn-default" id='stop'>Stop</button>
</div>
<div>
<ul class="list-unstyled" id='ul'></ul>
</div>
<script src="https://code.jquery.com/jquery-2.2.0.min.js"></script>
<script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.6/js/bootstrap.min.js"></script>
Please check RecordRTC.
RecordRTC is MIT licensed on GitHub.
Yes, as you understood, MediaStreamRecorder is currently unimplemented.
MediaStreamRecorder is a WebRTC API for recording getUserMedia() streams. It allows web apps to create a file from a live audio/video session.
Alternatively, you may do it like this: http://ericbidelman.tumblr.com/post/31486670538/creating-webm-video-from-getusermedia, but the audio part is missing.
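These days the missing audio can be worked around: capture the canvas as a stream and add a microphone track, then record the combined stream. A sketch, where canvas stands for whatever element that technique draws into:

// Sketch: add a microphone track to the canvas capture so the recording
// carries audio as well as video.
const canvasStream = canvas.captureStream(30); // 30 fps from the canvas
navigator.mediaDevices.getUserMedia({ audio: true }).then((mic) => {
  canvasStream.addTrack(mic.getAudioTracks()[0]); // attach the audio track
  const recorder = new MediaRecorder(canvasStream);
  recorder.start();
});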
You can use RecordRTC-together, which is based on RecordRTC.
It supports recording video and audio together, in separate files. You will need a tool like ffmpeg to merge the two files into one on the server.
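The merge on the server might look like this Node.js sketch (it assumes ffmpeg is installed, and the two input filenames are placeholders for whatever RecordRTC-together produced):

// Sketch: merge the separate video and audio recordings with ffmpeg.
const { execFile } = require('child_process');

execFile('ffmpeg', [
  '-i', 'video.webm',  // recorded video track (placeholder name)
  '-i', 'audio.wav',   // recorded audio track (placeholder name)
  '-c:v', 'copy',      // keep the video stream as-is
  '-c:a', 'libvorbis', // encode the audio to fit the WebM container
  'merged.webm'
], (err) => {
  if (err) throw err;
  console.log('wrote merged.webm');
});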
Web Call Server 4 can record WebRTC audio and video to a WebM container.
The recording is done using the Vorbis codec for audio and the VP8 codec for video.
The initial WebRTC codecs are Opus or G.711 for audio and VP8 for video. So server-side recording requires either Opus/G.711-to-Vorbis transcoding, or VP8-to-H.264 transcoding if it is necessary to use another container, e.g. AVI.
For the record, I also don't have enough knowledge about this,
but I found this on GitHub:
<!DOCTYPE html>
<html>
<head>
<title>XSockets.WebRTC Client example</title>
<meta charset="utf-8" />
<style>
body {
}
.localvideo {
position: absolute;
right: 10px;
top: 10px;
}
.localvideo video {
max-width: 240px;
width:100%;
margin-right:auto;
margin-left:auto;
border: 2px solid #333;
}
.remotevideos {
height:120px;
background:#dadada;
padding:10px;
}
.remotevideos video{
max-height:120px;
float:left;
}
</style>
</head>
<body>
<h1>XSockets.WebRTC Client example </h1>
<div class="localvideo">
<video autoplay></video>
</div>
<h2>Remote videos</h2>
<div class="remotevideos">
</div>
<h2>Recordings ( Click on your camera stream to start record)</h2>
<ul></ul>
<h2>Trace</h2>
<div id="immediate"></div>
<script src="XSockets.latest.js"></script>
<script src="adapter.js"></script>
<script src="bobBinder.js"></script>
<script src="xsocketWebRTC.js"></script>
<script>
var $ = function (selector, el) {
if (!el) el = document;
return el.querySelector(selector);
}
var trace = function (what, obj) {
var pre = document.createElement("pre");
pre.textContent = JSON.stringify(what) + " - " + JSON.stringify(obj || "");
$("#immediate").appendChild(pre);
};
var main = (function () {
var broker;
var rtc;
trace("Ready");
trace("Try connect the connectionBroker");
var ws = new XSockets.WebSocket("wss://rtcplaygrouund.azurewebsites.net:443", ["connectionbroker"], {
ctx: '23fbc61c-541a-4c0d-b46e-1a1f6473720a'
});
var onError = function (err) {
trace("error", arguments);
};
var recordMediaStream = function (stream) {
if ("MediaRecorder" in window === false) {
trace("Recorder not started MediaRecorder not available in this browser. ");
return;
}
var recorder = new XSockets.MediaRecorder(stream);
recorder.start();
trace("Recorder started.. ");
recorder.oncompleted = function (blob, blobUrl) {
trace("Recorder completed.. ");
var li = document.createElement("li");
var download = document.createElement("a");
download.textContent = new Date();
download.setAttribute("download", XSockets.Utils.randomString(8) + ".webm");
download.setAttribute("href", blobUrl);
li.appendChild(download);
$("ul").appendChild(li);
};
};
var addRemoteVideo = function (peerId, mediaStream) {
var remoteVideo = document.createElement("video");
remoteVideo.setAttribute("autoplay", "autoplay");
remoteVideo.setAttribute("rel", peerId);
attachMediaStream(remoteVideo, mediaStream);
$(".remotevideos").appendChild(remoteVideo);
};
var onConnectionLost = function (remotePeer) {
trace("onconnectionlost", arguments);
var peerId = remotePeer.PeerId;
var videoToRemove = $("video[rel='" + peerId + "']");
$(".remotevideos").removeChild(videoToRemove);
};
var onConnectionCreated = function () {
console.log(arguments, rtc);
trace("oncconnectioncreated", arguments);
};
var onGetUserMedia = function (stream) {
trace("Successfully got some userMedia, hopefully a goat will appear...");
rtc.connectToContext(); // connect to the current context?
};
var onRemoteStream = function (remotePeer) {
addRemoteVideo(remotePeer.PeerId, remotePeer.stream);
trace("Opps, we got a remote stream. lets see if its a goat..");
};
var onLocalStream = function (mediaStream) {
trace("Got a localStream", mediaStream.id);
attachMediaStream($(".localvideo video "), mediaStream);
// if user click, video , call the recorder
$(".localvideo video ").addEventListener("click", function () {
recordMediaStream(rtc.getLocalStreams()[0]);
});
};
var onContextCreated = function (ctx) {
trace("RTC object created, and a context is created - ", ctx);
rtc.getUserMedia(rtc.userMediaConstraints.hd(false), onGetUserMedia, onError);
};
var onOpen = function () {
trace("Connected to the brokerController - 'connectionBroker'");
rtc = new XSockets.WebRTC(this);
rtc.onlocalstream = onLocalStream;
rtc.oncontextcreated = onContextCreated;
rtc.onconnectioncreated = onConnectionCreated;
rtc.onconnectionlost = onConnectionLost;
rtc.onremotestream = onRemoteStream;
rtc.onanswer = function (event) {
};
rtc.onoffer = function (event) {
};
};
var onConnected = function () {
trace("connection to the 'broker' server is established");
trace("Try get the broker controller form server..");
broker = ws.controller("connectionbroker");
broker.onopen = onOpen;
};
ws.onconnected = onConnected;
});
document.addEventListener("DOMContentLoaded", main);
</script>
On line 89, in my case, the oncompleted callback actually appends a link to the recorded file; if you click on that link it will start the download, and you can then save that file to your server.
The recording code looks something like this:
recorder.oncompleted = function (blob, blobUrl) {
trace("Recorder completed.. ");
var li = document.createElement("li");
var download = document.createElement("a");
download.textContent = new Date();
download.setAttribute("download", XSockets.Utils.randomString(8) + ".webm");
download.setAttribute("href", blobUrl);
li.appendChild(download);
$("ul").appendChild(li);
};
The blobUrl holds the path (a local object URL for the recording). I solved my problem with this; I hope someone will find it useful.
Currently, browsers support recording on the client side:
https://webrtc.github.io/samples/
One can push the recorded file to the server after the connection has ended, by uploading it through HTTP requests.
https://webrtc.github.io/samples/src/content/getusermedia/record/
https://github.com/webrtc/samples/tree/gh-pages/src/content/getusermedia/record
This has a drawback: if the user just closes the tab and these operations don't run on the backend side, the files may not be fully uploaded to the server.
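One way to soften that drawback, sketched below: MediaRecorder can emit periodic chunks, so uploading while recording means a closed tab loses at most the last slice (the /upload-chunk endpoint and the stream variable are assumptions here):

// Sketch: request a chunk roughly every second and upload each one as it
// arrives, instead of doing one big upload at the end.
const recorder = new MediaRecorder(stream);
let seq = 0;

recorder.ondataavailable = (e) => {
  if (e.data.size === 0) return;
  const fd = new FormData();
  fd.append('chunk', e.data, 'part-' + (seq++) + '.webm');
  fetch('/upload-chunk', { method: 'POST', body: fd });
};

recorder.start(1000); // timeslice in milliseconds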
As a more stable solution, Ant Media Server can record the stream on the server side; recording functionality is one of its basic features.
antmedia.io
Note: I'm a member of the Ant Media team.
Technically, you can use FFmpeg on the backend to mix video and audio.