How can I get direct video url from YouTube video URL? - javascript

I'm going to play YouTube and Vimeo videos using React-Native.
So far I have implemented React-Native-Video, which can play direct video URLs but does not support YouTube or Vimeo URLs.
So I just want to get a direct reference to a file (or a stream?) from YouTube Video URL.
e.g. from
https://www.youtube.com/watch?v=PCwL3-hkKrg
Are there any good third-party services or websites that solve this problem?
Any kind of code or approach is fine, although I'm currently using React-Native to play YouTube videos.

In the end, I was able to get a direct YouTube video file URL from the KeepVid website.
Since the site does not appear to expose a RESTful API, I fetched its HTML content and parsed it to extract the download URLs.
I haven't included the code here as it is very simple.
I hope this helps.
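For reference, a rough sketch of that fetch-and-parse idea; both the KeepVid query format and the link pattern below are assumptions about the site's markup, which may well have changed:
// Hypothetical sketch: fetch the KeepVid page for a YouTube URL and scrape
// anything that looks like a direct .mp4 link out of the returned HTML.
function getDownloadUrls(youtubeUrl) {
  return fetch('https://keepvid.com/?url=' + encodeURIComponent(youtubeUrl))
    .then(response => response.text())
    .then(html => html.match(/https?:\/\/[^"']+\.mp4[^"']*/g) || []);
}

getDownloadUrls('https://www.youtube.com/watch?v=PCwL3-hkKrg')
  .then(urls => console.log(urls));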

You can obtain the video URLs of a YouTube video with this code (run from the developer tools console on the video's watch page, where the ytplayer global is available):
// ES6 version
const videoUrls = ytplayer.config.args.adaptive_fmts
  .split(',')
  .map(item => item
    .split('&')
    .reduce((prev, curr) => (curr = curr.split('='),
      Object.assign(prev, {[curr[0]]: decodeURIComponent(curr[1])})
    ), {})
  )
  .reduce((prev, curr) => Object.assign(prev, {
    [curr.quality_label || curr.type]: curr
  }), {});
console.log(videoUrls);
// ES5 version (Object.assign and computed property names are ES6 features,
// so plain property assignment is used here instead)
var videoUrls = ytplayer.config.args.adaptive_fmts
  .split(',')
  .map(function (item) {
    return item
      .split('&')
      .reduce(function (prev, curr) {
        curr = curr.split('=');
        prev[curr[0]] = decodeURIComponent(curr[1]);
        return prev;
      }, {});
  })
  .reduce(function (prev, curr) {
    prev[curr.quality_label || curr.type] = curr;
    return prev;
  }, {});
console.log(videoUrls);
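A hypothetical usage example; the available quality labels differ per video, so '720p' below is only an assumption:
// Pick a rendition by its quality label and read its direct URL.
if (videoUrls['720p']) {
  console.log(videoUrls['720p'].url);
}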

I understand what you want: downloading or playing a video (stream) from a YouTube or Vimeo URL.
The only practical way is to use a WebView, since it's not possible to get a link to the underlying file (a "direct URL") from Vimeo or YouTube for playback (it may be possible for downloading).
If you need customized controls, e.g. your own SeekBar or buttons, you can implement them on top of the WebView.
I don't expect this to be much trouble to implement.
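For completeness, a minimal sketch of that WebView idea, assuming the react-native-webview package is installed (the video ID is the one from the question above):
import React from 'react';
import { WebView } from 'react-native-webview';

// Hypothetical component: renders YouTube's own embed player inside a WebView.
const VIDEO_ID = 'PCwL3-hkKrg';

export default function YouTubePlayer() {
  return (
    <WebView
      source={{ uri: `https://www.youtube.com/embed/${VIDEO_ID}` }}
      allowsFullscreenVideo
      // Allow the embedded player to start without an extra tap where the platform permits it.
      mediaPlaybackRequiresUserAction={false}
    />
  );
}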

Related

Streaming Audio Using Fetch

I'm working on a side project that is a music app, which requires me to play large files. I've figured out that there are a few solutions: one is to download the music to local storage and use that whenever it is available; the other is to stream music over the network. I have been unsuccessful with the latter.
My question is: how do I play audio from a stream?
This is the code I have so far:
fetch(audio)
  .then((res) => {
    const reader = res.body.getReader()
    const audioObj = new Audio()
    reader.read().then(function processAudio({ done, value }) {
      if (done) {
        console.log("You got all the data")
        return
      }
      // DO SOMETHING WITH VALUE
      return reader.read().then(processAudio)
    })
  })
  .catch((err) => {
    console.log(`Something went wrong ${err}`)
  })
I have tried looking at other APIs within the browser, e.g. MediaStream.
Any help is much appreciated. Thank you.
There is the Web Audio API, which might be useful. It appears to give you exactly what you need.
Web Audio API
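The answer above points to the Web Audio API; a different route commonly used for progressive playback is Media Source Extensions. Below is a minimal sketch of that MSE approach, not the original answer's code, assuming the track is served in a format MSE can append (audio/mpeg is used as a placeholder) and that audioUrl stands in for the endpoint from the question:
const audioUrl = "/path/to/track.mp3"; // placeholder for the question's endpoint
const audioEl = new Audio();
const mediaSource = new MediaSource();
audioEl.src = URL.createObjectURL(mediaSource);

mediaSource.addEventListener("sourceopen", async () => {
  // audio/mpeg is an assumption; MSE codec support varies by browser.
  const sourceBuffer = mediaSource.addSourceBuffer("audio/mpeg");
  const response = await fetch(audioUrl);
  const reader = response.body.getReader();

  const pump = async () => {
    const { done, value } = await reader.read();
    if (done) {
      mediaSource.endOfStream();
      return;
    }
    // Wait for each append to finish before pulling the next chunk.
    sourceBuffer.addEventListener("updateend", pump, { once: true });
    sourceBuffer.appendBuffer(value);
  };
  pump();
});

// Playback may still require a user gesture depending on autoplay policy.
audioEl.play();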

React Native Audio Playing mp3 files with spaces in their names

I have been trying to build a media player in React Native using Expo to be able to play audio in my music project.
I have successfully hacked one together with the preferred design, etc., but I still have a minor bug. I receive information from an API endpoint with links to files stored on a server. These audio files play when the filename is just one word; when there are spaces in the name, the file does not play, e.g. .../musics/test.mp3 plays while .../musics/test 32.mp3 does not. Any idea on how to handle this issue in React Native would be highly appreciated. My play function:
startPlay = async (index = this.index, playing = false) => {
  const url = this.list[index].url;
  this.index = index;
  console.log(url);
  // If music is currently playing, stop it
  if (playing) {
    await this.soundObject.stopAsync();
  } else {
    // If the item is already loaded just play it, otherwise load it before playing
    if (this.soundObject._loaded) {
      await this.soundObject.playAsync();
    } else {
      await this.soundObject.loadAsync(url);
      await this.soundObject.playAsync();
    }
  }
};
url is the link to the file.
I am working on a streaming platform and would love to end up with a player similar to this one: https://hackernoon.com/building-a-music-streaming-app-using-react-native-6d0878a13ba4
But I am using React Native with Expo. All the implementations I have come across online use bare React Native without Expo. Any pointers to existing work on this using Expo (e.g. packages) would be of great help.
Thanks.
The URLs should be encoded:
const uri = this.list[index].url;
this.index = index;
const url = encodeURI(uri);
console.log(url);
The uri "../musics/test 32.mp3" will be encoded to url "../musics/test%2032.mp3".
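For comparison, note how the two built-in encoders treat a full URL (a sketch; the domain is a placeholder):
// encodeURI leaves the URL structure (scheme, slashes) intact and only
// escapes characters such as the space.
encodeURI("https://example.com/musics/test 32.mp3");
// -> "https://example.com/musics/test%2032.mp3"

// encodeURIComponent would also escape ":" and "/", which breaks a full URL;
// it is meant for individual query or path components.
encodeURIComponent("https://example.com/musics/test 32.mp3");
// -> "https%3A%2F%2Fexample.com%2Fmusics%2Ftest%2032.mp3"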

Getting number of audio channels for an AudioTrack

I have a video element, with data being added via MSE. I'm trying to determine how many audio channels there are in each track.
The AudioTrack objects themselves don't have a property with this information. The only way I know to go about it is to use the Web Audio API:
const v = document.querySelector('video');
const ctx = new OfflineAudioContext(32, 48000, 48000);
console.log(Array.from(v.audioTracks).map((track) => {
  return ctx.createBufferSource(track.sourceBuffer).channelCount;
}));
For a video with a single mono track, I expect to get [1]. For a video with a single stereo track, I expect to get [2]. Yet, every time I get [2] no matter what the channel count is in the original source.
Questions:
Is there a proper direct way to get the number of channels in an AudioTrack?
Is there something else I could be doing with the Web Audio API to get the correct number of channels?
I stumbled upon an approach for this that seems to work. It looks like by using decodeAudioData we can grab the decoded buffer data for a file. I built a little function that returns a Promise resolving with that buffer data, which should report the correct number of channels of an audio file:
// An AudioContext is needed for decodeAudioData
const audioContext = new (window.AudioContext || window.webkitAudioContext)();

function loadBuffer(path) {
  return fetch(path)
    .then(response => response.arrayBuffer())
    .then(
      buffer =>
        new Promise((resolve, reject) =>
          audioContext.decodeAudioData(
            buffer,
            data => resolve(data),
            err => reject(err)
          )
        )
    )
}
Then you can use it like this:
loadBuffer(audioSource).then(data => console.log(data.numberOfChannels))
Might be best to store and reuse the data if it can be called multiple times.
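If the same file can be requested repeatedly, one way to store and reuse the result (a sketch, not from the original answer) is to cache the Promise by path:
const bufferCache = new Map();

function loadBufferCached(path) {
  // Reuse the in-flight or resolved Promise instead of fetching and decoding again.
  if (!bufferCache.has(path)) {
    bufferCache.set(path, loadBuffer(path));
  }
  return bufferCache.get(path);
}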

How to create or convert text to audio at chromium browser?

While trying to determine a solution to How to use Web Speech API at chromium?, I found that
var voices = window.speechSynthesis.getVoices();
returns an empty array for the voices identifier.
I am not certain whether the lack of support in the Chromium browser is related to this issue: Not OK, Google: Chromium voice extension pulled after spying concerns?
Questions:
1) Are there any workarounds which can implement the requirement of creating or converting audio from text in the Chromium browser?
2) How can we, the developer community, create an open source database of audio files reflecting both common and uncommon words; served with appropriate CORS headers?
There are several possible workarounds I have found which provide the ability to create audio from text; two of them require requesting an external resource, the other uses meSpeak.js by @masswerk.
The first uses the approach described at Download the Audio Pronunciation of Words from Google, which suffers from not being able to determine in advance which words actually exist as files at the resource, short of writing a shell script or performing a HEAD request to check whether a network error occurs. For example, the word "do" is not available at the resource used below.
window.addEventListener("load", () => {
  const textarea = document.querySelector("textarea");
  const audio = document.createElement("audio");
  const mimecodec = "audio/webm; codecs=opus";
  audio.controls = "controls";
  document.body.appendChild(audio);
  audio.addEventListener("canplay", e => {
    audio.play();
  });
  let words = textarea.value.trim().match(/\w+/g);
  const url = "https://ssl.gstatic.com/dictionary/static/sounds/de/0/";
  const mediatype = ".mp3";
  Promise.all(
    words.map(word =>
      fetch(`https://query.yahooapis.com/v1/public/yql?q=select * from data.uri where url="${url}${word}${mediatype}"&format=json&callback=`)
        .then(response => response.json())
        .then(({query: {results: {url}}}) =>
          fetch(url).then(response => response.blob())
            .then(blob => blob)
        )
    )
  )
  .then(blobs => {
    // const a = document.createElement("a");
    audio.src = URL.createObjectURL(new Blob(blobs, {
      type: mimecodec
    }));
    // a.download = words.join("-") + ".webm";
    // a.click()
  })
  .catch(err => console.log(err));
});
<textarea>what it does my ninja?</textarea>
Resources at Wikimedia Commons Category:Public domain are not necessarily served from the same directory; see How to retrieve Wiktionary word content?, wikionary API - meaning of words.
If the precise location of the resource is known, the audio can be requested, though the URL may include prefixes other than the word itself.
fetch("https://upload.wikimedia.org/wikipedia/commons/c/c5/En-uk-hello-1.ogg")
.then(response => response.blob())
.then(blob => new Audio(URL.createObjectURL(blob)).play());
Not entirely sure how to use the Wikipedia API, How to get Wikipedia content using Wikipedia's API?, Is there a clean wikipedia API just for retrieve content summary? to get only the audio file. The JSON response would need to be parsed for text ending in .ogg, then a second request would need to be made for the resource itself.
fetch("https://en.wiktionary.org/w/api.php?action=parse&format=json&prop=text&callback=?&page=hello")
.then(response => response.text())
.then(data => {
new Audio(location.protocol + data.match(/\/\/upload\.wikimedia\.org\/wikipedia\/commons\/[\d-/]+[\w-]+\.ogg/).pop()).play()
})
// "//upload.wikimedia.org/wikipedia/commons/5/52/En-us-hello.ogg\"
which logs
Fetch API cannot load https://en.wiktionary.org/w/api.php?action=parse&format=json&prop=text&callback=?&page=hello. No 'Access-Control-Allow-Origin' header is present on the requested resource
when not requested from the same origin. We would need to try to use YQL again, though I am not certain how to formulate the query to avoid errors.
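One alternative that avoids YQL entirely (a sketch, not part of the original answer): the MediaWiki API permits anonymous cross-origin reads when origin=* is added to the query string, so the JSONP-style callback parameter can be dropped:
fetch("https://en.wiktionary.org/w/api.php?action=parse&format=json&prop=text&origin=*&page=hello")
  .then(response => response.json())
  .then(data => {
    // The parsed page HTML lives under parse.text["*"]; pull the first .ogg link out of it.
    const html = data.parse.text["*"];
    const match = html.match(/\/\/upload\.wikimedia\.org\/wikipedia\/commons\/[\w\/.-]+\.ogg/);
    if (match) {
      new Audio(location.protocol + match[0]).play();
    }
  })
  .catch(err => console.log(err));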
The third approach uses a slightly modified version of meSpeak.js to generate the audio without making an external request for the audio itself. The modification was to create a proper callback for the .loadConfig() method:
fetch("https://gist.githubusercontent.com/guest271314/f48ee0658bc9b948766c67126ba9104c/raw/958dd72d317a6087df6b7297d4fee91173e0844d/mespeak.js")
.then(response => response.text())
.then(text => {
const script = document.createElement("script");
script.textContent = text;
document.body.appendChild(script);
return Promise.all([
new Promise(resolve => {
meSpeak.loadConfig("https://gist.githubusercontent.com/guest271314/8421b50dfa0e5e7e5012da132567776a/raw/501fece4fd1fbb4e73f3f0dc133b64be86dae068/mespeak_config.json", resolve)
}),
new Promise(resolve => {
meSpeak.loadVoice("https://gist.githubusercontent.com/guest271314/fa0650d0e0159ac96b21beaf60766bcc/raw/82414d646a7a7ef11bb04ddffe4091f78ef121d3/en.json", resolve)
})
])
})
.then(() => {
// takes approximately 14 seconds to get here
console.log(meSpeak.isConfigLoaded());
meSpeak.speak("what it do my ninja", {
amplitude: 100,
pitch: 5,
speed: 150,
wordgap: 1,
variant: "m7"
});
})
.catch(err => console.log(err));
One caveat of the above approach is that it takes approximately fourteen and a half seconds for the three files to load before the audio is played back. However, it avoids making an external request for each word.
It would be a positive to do either or both of the following: 1) create a FOSS, developer-maintained database or directory of sounds for both common and uncommon words; 2) develop meSpeak.js further to reduce the load time of the three necessary files, and use Promise-based approaches to provide notifications of the loading progress and of the readiness of the application.
In this user's estimation, it would be a useful resource if developers themselves created and contributed to an online database of files which respond with an audio file for a specific word. I am not entirely sure whether GitHub is the appropriate venue to host audio files; I will have to consider the possible options if interest in such a project is shown.

Get youtube file URL from video ID using API

I want to play all YouTube files embedded on a site using the SoundManager2 audio player. How can I get access to the YouTube file URL from a video ID?
Note: SoundCloud offers the possibility to get the direct streaming URL from the song URL. Here is a working example with SoundManager2 and Angular. Now I want to do the same for YouTube.
Your title says "using API". It's not possible to get access to a YouTube file URL from a video ID that way: with the file URL you could also download it, distribute the content, etc., which is against the YouTube Terms of Service, so no such API exists.
Since the question is tagged with JavaScript, there is a possibility to get direct access to the file URL. This is probably not exactly what you wanted, but it works from the developer tools console available in any browser. Here you go:
const videoUrls = ytplayer.config.args.url_encoded_fmt_stream_map
  .split(',')
  .map(item => item
    .split('&')
    .reduce((prev, curr) => (curr = curr.split('='),
      Object.assign(prev, {[curr[0]]: decodeURIComponent(curr[1])})
    ), {})
  )
  .reduce((prev, curr) => Object.assign(prev, {
    [curr.quality + ':' + curr.type.split(';')[0]]: curr
  }), {});
console.log(videoUrls);
One slightly hacky way: you could use YouTube's oEmbed API, although it requires a URL, so you'd need to append the video ID to a watch URL.
https://www.youtube.com/oembed?url=http%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DiwGFalTRHDA
This will consistently return an embed iframe, which you could regex the URL out of.
var regex = /<iframe.*?src='(.*?)'/;
var src = regex.exec(str)[1];
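A sketch of an alternative (not from the original answer): the oEmbed endpoint returns JSON, so the iframe src can be pulled from its "html" field rather than regexing a raw string. Note the endpoint may not send CORS headers, so this might need to run server-side or through a proxy; the video ID is the one from the example URL above.
const videoId = "iwGFalTRHDA";
const oembedUrl = "https://www.youtube.com/oembed?format=json&url=" +
  encodeURIComponent("https://www.youtube.com/watch?v=" + videoId);

fetch(oembedUrl)
  .then(response => response.json())
  .then(data => {
    // data.html contains the embed iframe markup; extract its src attribute.
    const match = /src="([^"]+)"/.exec(data.html);
    console.log(match && match[1]); // the embed URL for the video
  })
  .catch(err => console.log(err));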
