Originally I made a website that retrieves data from an Azure SQL DB, but the requirement has changed: my website is now expected to read JSON files from Blob Storage, and those JSON files are stored in different Azure folders.
I know nothing about JavaScript.
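For what it's worth, a rough Node.js sketch of reading those JSON blobs with the @azure/storage-blob package; the container name, prefix and connection string below are illustrative, and a blob "folder" is really just a name prefix:

    const { BlobServiceClient } = require('@azure/storage-blob');

    const service = BlobServiceClient.fromConnectionString(process.env.AZURE_STORAGE_CONNECTION_STRING);
    const container = service.getContainerClient('my-container');

    // Read every JSON blob under a given "folder" (name prefix).
    async function readJsonBlobs(prefix) {
      const results = [];
      for await (const item of container.listBlobsFlat({ prefix })) {
        const blob = container.getBlobClient(item.name);
        const body = await blob.downloadToBuffer();   // Node-only helper
        results.push(JSON.parse(body.toString('utf8')));
      }
      return results;
    }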
I am trying to build an image upload system with Node and Azure Blob Storage. I have decided to use the Azure Blob browser JS SDK to upload the images. I have a dilemma in this process: how do I tell the server the blob name? I thought of the following approach, but it has several problems:
Let's just say I give the blob a UUID and send the same UUID to the server, but the client JS can be changed, so the real blob name and the one sent to the server can differ.
It might be that my approach is completely trash. Please reply, I am a newbie to web development.
If I understand your question, I'm doing something similar in ASP.NET and SQL, where I actually store the blob itself in Azure Storage, BUT in a SQL table it's really just a row of metadata: each image has an internal numerical seed ID, a "user-defined" name (which does not have to be unique across the system), and the GUID for the blob name stored in Azure.
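The answer above is ASP.NET + SQL; since the question is about Node, here is a rough Node/Express sketch of the same idea, where the server (not the client) picks the GUID blob name, so the two can never differ. All names and the data-access call are illustrative:

    const express = require('express');
    const crypto = require('crypto');

    const app = express();
    app.use(express.json());

    app.post('/api/images', async (req, res) => {
      const blobName = crypto.randomUUID();          // GUID used as the Azure blob name
      const userDefinedName = req.body.name;         // display name, not required to be unique

      // Hypothetical data-access call: store the metadata row (seed ID, name, GUID).
      // await db.insert('images', { blobName, userDefinedName });

      // Optionally also return a SAS token scoped to exactly this blob name,
      // so the browser can upload directly but only to the name the server chose.
      res.json({ blobName });
    });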
Recently, I found something called 'lowdb', which can manage a JSON file with Node.js.
Actually, I could use MySQL or another database, but the app I am developing is a small application, so it only needs a very tiny DB, like a simple JSON file.
This link goes to the lowdb repository and examples:
https://github.com/typicode/lowdb
As you can see, it manages a JSON file and includes CRUD (Create, Read, Update, Delete) functions. But the data are saved in local storage, not on the server. So even if I manage the JSON files, it will only apply to the local JSON file. I want to save it on the server.
How can I manage a JSON file with Node.js? Please give me some keywords or suggest some Node dependencies.
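For reference, with its file adapter lowdb writes to a JSON file on the server's disk (next to the Node process), not to browser Local Storage. A minimal sketch of the classic lowdb 1.x API (newer releases use a different, Promise-based API):

    const low = require('lowdb');
    const FileSync = require('lowdb/adapters/FileSync');

    const db = low(new FileSync('db.json'));   // created on disk if it does not exist
    db.defaults({ posts: [] }).write();        // initial structure

    // Create
    db.get('posts').push({ id: 1, title: 'hello' }).write();
    // Read
    const post = db.get('posts').find({ id: 1 }).value();
    // Update
    db.get('posts').find({ id: 1 }).assign({ title: 'updated' }).write();
    // Delete
    db.get('posts').remove({ id: 1 }).write();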
I have a web page that is usually supposed to work offline (without an internet connection).
Once in a while it needs to connect to the web and grab some data to be used offline.
I'm searching for a way to store the data locally while it is connected and still have access to the data offline.
I checked local storage and the FileSystem API, but both follow the Same-Origin Policy.
Any suggestion will be appreciated.
When I was creating an offline application that synced with an online version, I kept the required information in a JSON file instead of LocalStorage.
Workflow:
The user requests new files to be generated (a.k.a. syncs with the server) using some online interface.
A JSON file with the needed data is generated and saved alongside the offline files.
The user downloads the new files and replaces the old ones.
The offline JS reads the JSON file and gets all the information (see the sketch below).
We were using a Java installer (launch4j to generate .jar files and IzPack to make the installer).
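A minimal sketch of the last step, assuming the offline bundle is opened through a small local web server or a packaged app (so a relative fetch() works); the file name is illustrative:

    // The offline page loads the locally stored JSON instead of calling the online API.
    fetch('./data/offline-data.json')
      .then((response) => response.json())
      .then((data) => {
        // Use the data exactly as if it had just come from the server.
        console.log('Loaded offline data:', data);
      })
      .catch((err) => console.error('Could not read the offline data file', err));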
I have a web application with a JavaScript frontend and a Java backend.
I have a use case where users can upload their profile pictures. The use case is simple: send the image from the user's browser and store it in S3 with a maximum file size. Currently the frontend sends the image data stream to the backend, and the backend then stores the image in S3 using the AWS Java SDK.
The backend currently has to store this image on the file system first in order to know the image file size (and avoid reading more bytes than a certain maximum), since S3 requires the PUT Object request to be sent along with the file's Content-Length.
Is there any other way I can do this with AWS? Using another service? Maybe Lambda? I don't like this method of having to store the file on a file system first and then open a stream again to send it to S3.
Thank you very much in advance.
You might get the file size on the client side, as mentioned here, but consider browser support.
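For example, with the HTML5 File API (subject to the browser-support caveat above); the size limit below is illustrative, and the server still has to enforce it because client-side checks can be bypassed:

    const MAX_UPLOAD_BYTES = 5 * 1024 * 1024;   // example limit: 5 MB
    const input = document.querySelector('input[type="file"]');

    input.addEventListener('change', () => {
      const file = input.files[0];
      if (file && file.size > MAX_UPLOAD_BYTES) {
        alert('File is too large');             // reject before uploading anything
      }
    });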
You shouldn't share your keys with the client-side code. I believe query string authentication should be used in this scenario.
Assuming your maximum file size is less than your available memory, why can't you just read it into a byte[] or something similar, from which you can send it to S3 without writing it to disk? You should also be able to get the size that way.
It depends on the S3 Java client you are using. If it allows you to read from a ByteArrayInputStream, then you should be able to do just that.
Looks like you can use an InputStream. See the javadoc:
    new PutObjectRequest(String bucketName,
                         String key,
                         InputStream input,        // e.g. a ByteArrayInputStream over the in-memory byte[]
                         ObjectMetadata metadata)  // supply the size via metadata.setContentLength(...)
I am working on a mobile web application. In my application I get a JSON string from a server in my JavaScript, using:
    var xhr = new XMLHttpRequest();
    xhr.open('GET', url);
Then I convert the JSON text to a JSON object, save it in local storage, and use it in different JavaScript files. In my project I am using only HTML and JavaScript files, and everything works fine for me. My only problem is how to secure the data I get so it won't be used by others: the application will be available for people to use, and I am not allowed to let them access the raw JSON string I am getting from the server.
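To complete the snippet above, the flow described here looks roughly like this (the URL and the storage key are illustrative):

    var xhr = new XMLHttpRequest();
    xhr.open('GET', url);
    xhr.onload = function () {
      var data = JSON.parse(xhr.responseText);                 // JSON text -> object
      localStorage.setItem('appData', JSON.stringify(data));   // readable by the other scripts
    };
    xhr.send();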
Thanks for your time.
One way is to have the server verify the connection: the server checks whether the user is logged in, and only if that requirement is met does it send the data.
Also use SSL.
And avoid caching the data:
http://www-archive.mozilla.org/projects/netlib/http/http-caching-faq.html
http://en.wikipedia.org/wiki/List_of_HTTP_header_fields#Avoiding_caching
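A minimal Express-style sketch of those three points; the original backend isn't specified, so the login check below is just an assumed example using express-session, and the route name is illustrative:

    const express = require('express');
    const session = require('express-session');

    const app = express();
    app.use(session({ secret: 'change-me', resave: false, saveUninitialized: false }));

    app.get('/api/data', (req, res) => {
      if (!req.session.user) {                 // only logged-in users get the JSON
        return res.status(401).json({ error: 'Not logged in' });
      }
      res.set('Cache-Control', 'no-store');    // ask the browser and proxies not to cache it
      res.set('Pragma', 'no-cache');           // legacy header, per the links above
      res.json({ secret: 'data for this user only' });
    });

    // In production, put this behind HTTPS (SSL/TLS) so the JSON cannot be read in transit.
    app.listen(3000);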