I have Protocol Buffers (protobuf) files containing models (C# classes) and WADL files describing the Web API controllers (controllers and methods in XML format). Now I want to convert both (protobuf and WADL) into a single file in a standard JSON format. This JSON could then be used by Swagger UI, SoapUI, or any other tool that accepts JSON. And when someone modifies the JSON in Swagger, the corresponding protobuf and WADL must be updated as well. (I know it seems like a lot of work; is there an easy way?) Can anyone please tell me how to do this?
I am new to XML. I have an XML document into which I am inserting data manually. I wanted to know if it is possible to include an image in an XML file directly, not by using a file path. I have found something about encoding, but I do not understand how this works, and the option is not even available in the XML editor. After storing the images in the XML file, I will access them using JavaScript. Please provide further information on this matter.
An image is binary data, and the usual way to store binary data in an XML document is by encoding it in base64 (which turns it into ASCII characters). Libraries to convert from binary to base64, and back, are widely available, but the details depend very much on your programming environment. There are also online services where you can upload an image and get back its base64 representation; one example is https://www.base64encode.net/base64-image-encoder
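For example, in Node.js the conversion in both directions takes a couple of lines with the built-in Buffer class (a minimal sketch; the file names are just placeholders):

var fs = require('fs');

// Read an image and encode it as base64
var imageBytes = fs.readFileSync('logo.png');       // raw binary data
var base64Text = imageBytes.toString('base64');     // ASCII-safe string

// The string can now sit inside an XML element, e.g.
// <image encoding="base64">iVBORw0KGgo...</image>

// Decoding is the reverse operation:
var decodedBytes = Buffer.from(base64Text, 'base64');
fs.writeFileSync('logo-copy.png', decodedBytes);

In the browser, your JavaScript can read the element's text and decode it with atob, or simply use the base64 string as-is in a data: URI (e.g. as an img src).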
BigchainDB supports the .json file format. Is there any possible way to store digital assets like images and documents?
You can convert any file (seen as a sequence of bits) into JSON, for example by encoding it in base64; the resulting string can then be included as a value in a JSON document.
While that is possible, it's not a good idea. BigchainDB is meant more for storing metadata (which you might want to search or query), not for storing big files.
BigchainDB also interoperates with IPFS. When a file is added to IPFS, IPFS returns a unique hash; store that hash as the asset data or as metadata of the asset in BigchainDB. The hash can then be queried from BigchainDB and used to retrieve the file from IPFS.
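For instance, the data actually written to BigchainDB could be as small as this (a hypothetical shape; the field names are purely illustrative, not a fixed schema):

// Hypothetical asset: only the IPFS hash and searchable fields
// go into BigchainDB; the image itself lives in IPFS.
var asset = {
    data: {
        filename: 'contract-scan.png',                              // illustrative
        ipfsHash: 'QmYwAPJzv5CZsnA625s3Xf2nemtYgPpHdWEz79ojWnPbdG'  // as returned by `ipfs add`
    }
};
var metadata = { uploadedAt: '2019-01-15T10:00:00Z' };              // queryable later

A query against BigchainDB then returns the hash, and the file itself is fetched from any IPFS gateway using that hash.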
I am not sure if the title is an appropriate description of what I intend to do. However, below is the URL from which I want to parse the CSV file in Python (the CSV handle is visible in the top right corner of the interactive table).
https://www.mcxindia.com/market-data/bhavcopy
I have parsed files before using Requests and lxml, but in those cases the address (or location) of the CSV file was rather straightforward. In this case, I am not able to ascertain the actual URL of the file. Although rudimentary, my assessment is that it is embedded in JavaScript code. My question is whether I can indeed parse files such as this, and if yes, how, using requests and lxml?
This is public data, and a very inefficient alternative would be to download the data daily and then parse the locally stored CSV file, but that is no automation. Any suggestion on how I can automate this task would be very valuable.
I'm working on a small project where I'm going to read some parameters from a SQLite database. The data is generated on a Linux server running C code. I then want to create a script in JavaScript to fetch the data from the database. I've tried alasql.js, but it takes a very long time (~1 minute) to get the list of parameters from the two tables.
SELECT sensors.id, sensors.sensorname, sensors.sensornumber,
       information.sensorvalue, information.timestamp
FROM sensors
INNER JOIN information ON sensors.id = information.sensorid
I've been reading about IndexedDB, but it seems it only works with JavaScript, not with C code. Please correct me if I'm wrong. The crux here is that I want a database that supports writing from C code and reading from JavaScript. The database can be read either via the file:// scheme or from an IP address.
Would appreciate any help regarding this problem. Thanks!
Unfortunately, SQLite's internal file format is hard to parse quickly with JavaScript. One reason is that the only browser-side library that can read it is SQL.js, and it is relatively slow because it cannot read selected pages from the database file, only the whole file.
One option: you can switch from the SQLite format to CSV or TSV plain-text formats, which can be sent to the browser and parsed easily and quickly with jQuery.CSV, PapaParse, AlaSQL, or any other CSV parsing library, like:
alasql('SELECT * FROM TSV("mydata.txt",{headers:true})',[],function(data){
    // data is an array of row objects parsed from mydata.txt
});
Another alternative: you can write a simple server in Node.js with native support for SQLite and then serve the requested records in JSON format via Ajax to your application (like in this article).
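A minimal sketch of that second approach, assuming Node.js with the sqlite3 and express packages installed and a database file called sensors.db:

// Tiny Node.js server: the C code keeps writing to sensors.db,
// and the browser fetches the joined rows as JSON from /sensors.
var express = require('express');
var sqlite3 = require('sqlite3');

var app = express();
var db = new sqlite3.Database('sensors.db'); // file name is an assumption

app.get('/sensors', function (req, res) {
    db.all(
        'SELECT sensors.id, sensors.sensorname, sensors.sensornumber, ' +
        'information.sensorvalue, information.timestamp ' +
        'FROM sensors ' +
        'INNER JOIN information ON sensors.id = information.sensorid',
        function (err, rows) {
            if (err) return res.status(500).send(err.message);
            res.json(rows); // ready for the browser to consume
        }
    );
});

app.listen(3000);

On the browser side, a single Ajax GET to /sensors then replaces the slow whole-file parse.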
I want to load an external file using AJAX GET and then parse it for the relevant information, leaving out all the comments.
file: stuff.conf
: This is the list
: of colors needed
#5d3939 : nice
#9e1818 : ugly!
#cd7979
#409c81
#6e6f14 : ok...
I want the hex colours in an array.
Please help!
Here you go:
// grab every 6-digit hex colour code (case-insensitive)
var arr = response.match(/\#[a-f0-9]{6}/gi);
where response is your Ajax response string.
Live demo: http://jsfiddle.net/simevidas/RnPS3/1/
You can write your own JS to parse any data format, but the somewhat standard, least-hassle way to interchange data like this is to put it in JSON format, with the color values in an array (or whatever structure you want them to end up in). You then read the contents of the file into a string and call a JSON parser; the parser's return value will be an array of color values (if that's how you structured the JSON). The latest browsers have a JSON parser built in. For compatibility with older browsers, you can either use a parser from one of the common libraries like jQuery or YUI, or find code on the web that adds just a JSON parser.
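For example, if you are free to change the file format, stuff.conf could become a JSON file like this (a sketch; the file name and property name are just illustrative):

// colors.json — hypothetical JSON version of stuff.conf:
// { "colors": ["#5d3939", "#9e1818", "#cd7979", "#409c81", "#6e6f14"] }

// After the AJAX GET, one call turns it into a usable array:
var parsed = JSON.parse(response);  // JSON.parse is built into modern browsers
var colors = parsed.colors;         // ["#5d3939", "#9e1818", ...]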