I have been learning about the Tin Can API, and it is said to be able to track almost any learning activity, including reading a PDF file. Does anybody know how to do that?
Is there a way to insert some code (maybe JavaScript or C#) which then sends an xAPI statement to an LRS?
On one hand you need an LRS server to keep a repository of the actions performed by the user (your own or some other institution's). On the other, you need to serve the PDF with something that handles the xAPI protocol. I know links are not cool in general :-), but this is the one you need: https://xapi.com/ You're basically zipping some JSON files with your PDF/content, and that makes an xAPI-enabled package, which you can then serve through an LMS or just distribute.
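For the code side of the question, a minimal sketch of sending a statement to an LRS from JavaScript might look like this (the endpoint URL, credentials, and activity ID are placeholders you would replace with your own):

// Minimal sketch: POST one xAPI statement to an LRS.
// The endpoint, username, and password are placeholders.
const statement = {
  actor: { mbox: "mailto:learner@example.com", name: "Example Learner" },
  verb: {
    id: "http://adlnet.gov/expapi/verbs/experienced",
    display: { "en-US": "experienced" }
  },
  object: { id: "http://example.com/materials/intro.pdf" }
};

fetch("https://lrs.example.com/xapi/statements", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "X-Experience-API-Version": "1.0.3",
    // Most LRSs protect the statements endpoint with HTTP Basic auth.
    "Authorization": "Basic " + btoa("username:password")
  },
  body: JSON.stringify(statement)
}).then(res => console.log("LRS responded with status", res.status));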
I am using react-markdown to load markdown from a JSON file.
I serve a static site from CloudFront to cut costs and remove the operational overhead of running servers.
Currently all posts get compiled into a posts.json file that gets read by react-markdown in a React-based static site.
Beyond maybe splitting this into multiple files so that a huge JSON file doesn't have to be downloaded, are there any issues with this approach?
Is this a bad idea?
EDIT: react-snap is being used to "prebuild" (or whatever the term may be). However, I am unsure whether it does anything with regard to the JSON that gets loaded on the page, for example whether it is output in the build as plain HTML. I will have to confirm.
Your approach does take on some overhead and latency, since several dependencies must be satisfied before your content reaches the user:
* The browser must load and parse the script
* The script must fetch the JSON from the server
* react-markdown has to parse the markdown strings
* React has to render the components
None of these are likely to be particularly slow, but we can't do any of it concurrently, and it adds up.
But since your markdown is static and doesn't require re-rendering, you can get some efficiency from moving the render to the server, possibly even to the build step itself. If the markdown string is compiled to HTML via react-markdown at build time, then the client receives the markup that much more quickly.
Frameworks like Next.js do this by design: they let you specify async functions that fetch the data needed to render the page at build time. The data can of course be anything representable as React props, including JSON.
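As a rough sketch of that idea with Next.js (getStaticProps is Next.js's build-time data API; the file path and the slug/markdown fields are invented for illustration):

// pages/posts.js - build-time rendering sketch.
// posts.json is read once at build time, so the client never fetches it.
import ReactMarkdown from "react-markdown";
import posts from "../posts.json"; // hypothetical path to your compiled posts

export async function getStaticProps() {
  // Runs only at build time; the returned props are baked into the page.
  return { props: { posts } };
}

export default function Posts({ posts }) {
  return posts.map((post) => (
    <ReactMarkdown key={post.slug}>{post.markdown}</ReactMarkdown>
  ));
}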
It may be neither your responsibility nor your preference to change your project to use a new framework, but I hope the general ideas are useful on their own.
I think server-side rendering will help in your case, since at the moment most of the resources have to be compiled on the client machine. You can also use Puppeteer, which drives headless Chrome, to render resources on the server and then send the result to the client to reduce latency. See https://developers.google.com/web/tools/puppeteer
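For instance, a minimal prerendering sketch with Puppeteer (the URL and output file are placeholders) could look like:

// Sketch: render a client-side page in headless Chrome and save the HTML.
const puppeteer = require("puppeteer");
const fs = require("fs");

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  // Wait until the network is idle so client-rendered content is in the DOM.
  await page.goto("https://example.com/posts", { waitUntil: "networkidle0" });
  const html = await page.content(); // serialized DOM after rendering
  fs.writeFileSync("posts.html", html);
  await browser.close();
})();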
It looks like you have everything you need to use a static site generator like Gatsby. A static site generator will let you keep your big JSON file, which will only be read at build time. A generator like Gatsby will produce a standalone static HTML document for each of your blog posts.
You can also host your static site for free on the free tier of any of the popular CDNs, like Netlify and Surge.
We have set up a Bokeh server in our institute, which works properly. We also have Python-based code to analyse fMRI data, which at the moment uses matplotlib to plot and save. I want to move this code to the Bokeh server, let everybody upload files to the server from the client, and, once the analysis is done on the server, let them save the output plots to their local HDD. This file-transfer procedure seems to be lacking in Bokeh at the moment. I saw a new feature recently added on GitHub to upload JSON files, but my problem is that fMRI files come in various formats, and asking (not necessarily tech-savvy) users to convert their files into a certain format defeats the purpose. Also, I do not know any JS or the like, so I do not know what solutions people usually use for such web-based applications.
If anybody has a way around this issue, I'd be happy to hear it. Even if it is a solution independent of Bokeh (which would mean users need to open a separate page to upload the files, a page to run the analysis, and a page to save the output), please let me know. It won't be ideal, but at least better than no solution, which is where Bokeh stands right now. Thanks!
I'm not sure where you are getting your information. The FileInput widget added in Bokeh 1.3.0 can upload any file the user chooses, not just JSON.
I have written a script in PHP that connects to my FTP server and downloads the latest backup of all the websites I have created.
I have written this script in PHP (using ftp_get and all those lovely functions!), but I have also created a nice-looking dashboard in HTML, which uses AJAX not only to execute the PHP code but also to output all the "echo"s into a textarea, so I know which downloads succeeded and which failed.
My problem is that I am not the only employee; there are quite a few of us who could run this script. So, on the dashboard, I want a way of choosing where these backups are saved without having to edit a hardcoded destination in the PHP. At the moment it is hardcoded to save to my desktop, but if someone else were to use it on their computer, it wouldn't work.
I have tried searching for this, but no one has asked exactly the question I need answered.
If the answers could be using HTML, JavaScript/jQuery or PHP, that would be handy.
Thank you in advance.
------------EDIT------------
Not sure people are understanding my problem, so I will try and explain it differently :)
I have a backup tool that will connect to an FTP server, go through each project folder and download the latest backup of that site. In total, there are at least 20 files to be downloaded.
Multiple people will be using this tool, depending on who is around to back up the files.
Bob might want to save it directly to his desktop: C:/Users/Bob/Desktop.
Barbara might want to save it in C:/Users/Barbara/Backup Folder/2017/
Jimmy wants to save it in C:/Users/Jimmy/Projects/Project_name/Backup-01-01-2017
And I might want to save it onto an external HDD.
I want the tool to have a function that will allow the user to specify a location on their HDD where these files will all be saved.
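To illustrate, something along these lines is what I imagine on the dashboard side (just a sketch; backup.php and the element IDs are made-up names):

// Sketch: pass a user-chosen destination to the PHP script instead of
// hardcoding it. "backup.php", "#destination" and "#output" are placeholders.
$("#start").on("click", function () {
  $.post("backup.php", { destination: $("#destination").val() })
    .done(function (log) {
      $("#output").val(log); // show the echo output in the textarea
    });
});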
I hope this is clearer.
Many thanks.
I am trying to write some HTML/JS code that will facilitate uploading large files (multi-GB) to a remote server. Previously we had been using a Flash uploader, which uploaded a given file in a single network request. The Flash code would open a network connection, read a chunk of the file into memory, write that chunk to the connection, grab the next chunk, write it to the connection, and so on until the entire file was uploaded. It was done this way because most web browsers would attempt to read an entire file into memory before attempting to upload it. When dealing with multi-GB files, this essentially crashes the client system because it uses all of the client's memory. Now we are having issues with Flash, so it needs to go, and we want to replace it without modifying the existing server-side code.
A few Google searches for jQuery uploaders reveal that there are plenty of libraries which support "chunking", but they "chunk" over multiple requests. We do not want to chunk a file over multiple network requests; we merely want the JS to read the file in chunks as it writes the file to a single network connection.
Anybody know a library which can do this out of the box?
We are not opposed to modifying an existing library if need be. Does anyone have a snippet resembling the pseudo-code below that I might be able to retrofit into a library?
connection = fopen(...);
fputs("123", connection);
... some unrelated code ...
fputs("456", connection);
fclose(connection);
(Excuse my use of C functions in pseudo-JS code; I know that is not how you do it in JS. I am merely demonstrating, at a low level, the flow for how I want to write to the network connection before closing it.)
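For reference, here is roughly the same flow as real browser JS, as a sketch only (the /upload endpoint and header name stand in for our existing server URL). My understanding is that modern browsers stream a File passed to XHR's send() from disk rather than buffering it fully in memory, which keeps it to a single request:

// Sketch: upload a large File in ONE network request.
// "/upload" is a placeholder for the existing server endpoint.
function uploadFile(file) {
  var xhr = new XMLHttpRequest();
  xhr.open("POST", "/upload");
  xhr.setRequestHeader("X-File-Name", file.name);
  xhr.upload.onprogress = function (e) {
    console.log("sent " + e.loaded + " of " + e.total + " bytes");
  };
  xhr.onload = function () {
    console.log("done, status " + xhr.status);
  };
  xhr.send(file); // the browser streams the Blob; no manual chunk loop
}

document.querySelector("input[type=file]").addEventListener("change", function () {
  uploadFile(this.files[0]);
});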
NOTE: We are not trying to "modernize" or improve this project extensively; we are not trying to redo it. We have some old code that has sat here for years, and we want to make as few changes to the server-side code as possible. I have more important projects to modernize and make more efficient; this one just needs to work. Please don't advise me to implement "proper" file chunking on the server side. That was my suggestion, and if it had been taken, the task would have been assigned to a different developer. It is out of my control now, so this is a client-side-only fix, please!
Thanks, sorry for any headache!
You could try binaryjs. I haven't looked into the internals but I know it supports manually setting the chunk size. Maybe you can even set it to Infinity.
Specifically you could try:
var client = new BinaryClient('ws://example.com:9000', { chunkSize: Number.POSITIVE_INFINITY });
client.send('data...');
Note: binaryjs is a Node.js server library with a browser-compatible client library.
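If it helps, the receiving side is a small Node server; roughly like this (written from memory of the binaryjs README, so treat it as a sketch):

// Sketch of the Node side of binaryjs.
var BinaryServer = require("binaryjs").BinaryServer;
var fs = require("fs");

var server = new BinaryServer({ port: 9000 });
server.on("connection", function (client) {
  client.on("stream", function (stream, meta) {
    // meta is whatever the client sent along with the stream
    var file = fs.createWriteStream(meta.name);
    stream.pipe(file); // write incoming chunks straight to disk
  });
});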
Is there a way to have a blog directly integrated into my HTML/JavaScript-only website, without having to run something like a SQL database (e.g. MySQL) or a dynamic engine like PHP?
Maybe there is some service on the web that offers this (hopefully without ads :) ). Or maybe I can have a blog engine entirely written in JavaScript?
Entirely written in JavaScript? Surely that defeats the entire point of having a "blog engine" in the first place? The point being that the data is stored somewhere and dynamically retrieved. To avoid using anything server-side (which seems to be your intent) and only use HTML/JavaScript, you'd have to store all the data for the blog in files that are served to each visitor, and then retrieve the data from those particular locations using JavaScript.
Sorry if I'm misunderstanding the point here... but this seems to be an utterly useless way of trying to go about things. Blogs are, in general, either written statically (in HTML, though this is rare) or generated dynamically from a database by a server-side scripting language (most common).
Edit: As an additional point, I suppose you could include some third-party blog feed, or service, in your page, via use of JavaScript... but I'm unsure as to which (if any) blogging services would directly support this method of working. Additionally, this is quite an unreliable way of including third-party data in a page...
Here's a thought. It's not really a blog engine, but a wiki.
Entirely JavaScript/HTML/CSS. It all lives in a single HTML file:
http://www.tiddlywiki.com/
I'm not sure how it would work on a real live site, but their own site uses it. They suggest it as:
* A personal notebook
* A GTD ("Getting Things Done") productivity tool
* A collaboration tool
* For building websites (this site is a TiddlyWiki file!)
* For rapid prototyping
* ...and much more!
You could use GitHub Pages. You will get a generated blog with version control.
Another option is to use a desktop blog tool and then update your site.
You can use iWeb if you have a Mac, or CityDesk on Windows, or you may try this open source tool.
Edit: Today I came across another tool that may help: Zeta Producer.
http://code.google.com/p/showdown-blog/
Blog engine written in just JS and XML [v0.6] {JavaScript, XML}
So, what you want is to have a blog where your website provider doesn't provide a way to serve dynamic content?
The only way I can see to do it in that case is writing HTML files (or text files if you prefer) and adding them to the site. After that you can have some JavaScript add them to your "blog page".
You of course need to upload them to the website in the same way as you do the other files, and then have a way for the JavaScript to know which pages it should fetch.
I am not aware of any JavaScript blog engines, but you can have a look at the templating functions in, for instance, Prototype.
Of course, that means you will have to fetch both the template and the content through Ajax and let the client do all the processing (which could be slow and possibly insecure), and you still need a place to upload the content and update it.
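As a sketch of that idea (the manifest name and post paths are invented; the fetching could equally be done with Prototype's Ajax helpers):

// Sketch: a static "blog" driven entirely by client-side JavaScript.
// posts.json is a hand-maintained manifest listing the post files.
fetch("posts.json")
  .then(function (res) { return res.json(); })
  .then(function (posts) {
    posts.forEach(function (post) {
      fetch(post.file) // e.g. "posts/my-first-post.html"
        .then(function (res) { return res.text(); })
        .then(function (html) {
          var article = document.createElement("article");
          article.innerHTML = html;
          document.getElementById("blog").appendChild(article);
        });
    });
  });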
Your best bet is going to be using a generator to create the HTML/CSS/JS to upload to your server; take a look at Webby: http://webby.rubyforge.org/
If you really need to, you can use a public API for a service that lets you post small bits of info and retrieve them using JavaScript.
For example, if you only need small posts, you can make a blog in HTML/JavaScript that uses Twitter as the engine. Of course, you will be limited to 140 characters. I am sure there are other services that allow a similar idea but with fewer restrictions.
And of course the best option: get blog software, or host your blog with a service provider and link to it from your site.
Good luck
One solution would be to use an application that generates the static web pages of your blog and uploads them to your web server. This way you'd have a blog with static content that could all be managed in JavaScript alongside your existing site, without needing to install a database, daemon software, or additional dynamic web programming languages on your server. The static content generation could happen directly on your server if possible, or you could run the HTML generation tool locally and upload the output.
Movable Type has a tool like this. You still need somewhere to store the content of your blog, and for this Movable Type uses MySQL by default, so you'd still need to install a database somewhere, but the database could simply be one on your local desktop.
Movable Type also has support, via plugins or older versions, for retrieving data from SQLite or other databases. The advantage of SQLite is that it doesn't require installing a daemon the way MySQL does; you can just put a SQLite file on disk somewhere, give Movable Type the path to the file, and run the script to generate your static content.
There are likely other tools like Movable Type, and I have in the past generated blog-like web pages simply by writing small scripts that emit HTML. The main issue is just that you need somewhere for these scripts to fetch data from.
Another option might be to develop your blog using XSLT: you'd put the content of your pages in XML files, and then write a template in XSL that converts the XML to HTML.
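A sketch of the client-side version of that idea, using the browser's built-in XSLTProcessor (blog.xml and blog.xsl are hypothetical file names):

// Sketch: transform blog XML to HTML in the browser with XSLTProcessor.
Promise.all([
  fetch("blog.xml").then(function (r) { return r.text(); }),
  fetch("blog.xsl").then(function (r) { return r.text(); })
]).then(function (sources) {
  var parser = new DOMParser();
  var xml = parser.parseFromString(sources[0], "application/xml");
  var xsl = parser.parseFromString(sources[1], "application/xml");

  var processor = new XSLTProcessor();
  processor.importStylesheet(xsl); // load the XSL template
  var fragment = processor.transformToFragment(xml, document);
  document.body.appendChild(fragment); // inject the generated HTML
});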
If you google for 'static blog site generation' you might find other ideas/options, including the Jekyll/GitHub Pages approach mentioned in one of the other responses.