How to automatically generate pre-filled Google Forms links - javascript

I hope you're doing well.
I want to share a Google Form with some friends so they can fill in their personal preferences for a trip.
I already have some people's information in Excel, but not all of it.
So I found that it is possible to send personalized pre-filled form URLs to your recipients, but the process is very manual: https://support.yet-another-mail-merge.com/hc/en-us/articles/115004266085-Send-personalized-pre-filled-form-URLs-to-your-recipients
Do you know an easy way to automate the generation of pre-filled form links for my friends?

Generate the pre-filled URL manually once, using obvious placeholder data.
Then create an Excel file with all the data you want to pre-fill.
Use an Excel formula to create the URLs, then copy the column with the formulas and paste it as values. Each link is now unique to the data in its row.
Use ampersands to concatenate the data, quotes "around text", and the SUBSTITUTE function to URL-encode special characters. In the example below, the cell containing the first entry is A2.
You can nest substitutions, e.g. SUBSTITUTE(SUBSTITUTE(A2," ","%20"),"'","%27"), to handle data with spaces and, in this case, apostrophes.
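Since the question asks for JavaScript, here is a minimal sketch of the same idea in plain JavaScript (Node.js or browser). The form URL and the entry IDs are placeholders; copy the real ones from the pre-filled link you generated manually once.

// Build personalized pre-filled Google Form URLs from an array of rows.
// FORM_ID and the entry.<id> keys below are placeholders taken from the
// manually generated pre-filled link.
const baseUrl = "https://docs.google.com/forms/d/e/FORM_ID/viewform";

// Hypothetical mapping: form entry ID -> property of each row.
const fieldMap = {
  "entry.1111111111": "name",
  "entry.2222222222": "city",
};

const friends = [
  { name: "Anna O'Neil", city: "Lyon" },
  { name: "Marc Dupont", city: "Paris" },
];

function prefilledUrl(row) {
  const params = Object.entries(fieldMap)
    .map(([entryId, key]) => `${entryId}=${encodeURIComponent(row[key] ?? "")}`)
    .join("&");
  return `${baseUrl}?${params}`;
}

friends.forEach((f) => console.log(f.name, "->", prefilledUrl(f)));

encodeURIComponent takes care of spaces and most special characters, so there is no need to nest SUBSTITUTE calls by hand.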

Related

How can we add numerical questions from an HTML form to a PHP/MySQL database

I am creating an exam system in which I want to add numerical (math) questions from an HTML form to a MySQL database. How can I achieve this? I tried some editors but they did not work.
I'm attaching a sample image of the questions.
Thanks
First, you will need to decide which mathematics markup language you want to use.
Some examples are AsciiMath and LaTeX. You could then store the equations/questions as strings in the chosen format in a varchar column in your MySQL database.
Then you would use a library like MathJax to typeset the questions for display to the users.
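For example, a minimal sketch of the display side, assuming MathJax 3 is already loaded on the page and the question was stored in the varchar column as a LaTeX string (the element ID is hypothetical):

// Render a question stored as a LaTeX string (fetched from the MySQL
// varchar column) with MathJax 3. Assumes the page already includes
// <script src="https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-chtml.js"></script>
const questionFromDb = "Solve for x: \\(x^2 - 5x + 6 = 0\\)"; // as stored in the database
const container = document.getElementById("question");        // hypothetical <div id="question">
container.textContent = questionFromDb;                        // insert as text, not as HTML
MathJax.typesetPromise([container]);                           // ask MathJax to typeset it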

NodeJs create dynamic mySql table based on CSV input

I'm trying to build an application which does the following (simplified):
Allow the user to select a CSV file
Upload that CSV to NodeJS server
Parse the file and create array of rows (with headers)
Generate dynamic "Create Table" sql based on the column headers in the csv, but also detect the datatype (the column names, datatypes etc will be different every time)
Insert the csv data into the newly created table
It's step 4 I'm having trouble with. Is there a way to scan an array of data elements and determine what the datatype should be?
I've looked at Papa Parse and csv-parse, but neither does what I need. Papa Parse comes close, but it converts each array element separately and doesn't pick up dates.
Even if you run a full scan of the file, it will be difficult to guess the exact types.
Another problem is handling errors in input files, e.g. a number in a column that is supposed to hold a date.
Further: an insurance number (or account number) looks like a number, but in the database it should be stored as a string.
I suggest a method straight from big data analysis. Run the entire process in three stages:
First, create an intermediate table where each column is of type TEXT and import the data into it using MySQL's LOAD DATA INFILE ...
Then conduct a preliminary analysis based on the user's previous choices, the column names, and the content, and display a table "wizard" to the user (or skip the wizard).
The analysis should include the shortest, longest, average, and most common value lengths (e.g. the first 100 rows contain a long string that is really an error message like "Some date for some process isn't provided" while the rest are valid dates); the variety of values (gender, country, other "dictionary" values); and random content analysis (detecting dates and numbers).
Finally, you can use INSERT INTO ... SELECT and change the column types (don't forget to allow NULL for conversion errors), or convert and filter the data line by line.
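As a rough illustration of the staged approach, the generated SQL might look like this, built in Node.js from the CSV header row (the column and table names are made up for the example):

// Stage 1: a staging table where every column is TEXT, plus the bulk load.
const headers = ["name", "joined", "score"]; // taken from the CSV header row
const stagingSql = "CREATE TABLE csv_staging (\n  " +
  headers.map((h) => "`" + h + "` TEXT").join(",\n  ") + "\n);";

const loadSql = "LOAD DATA INFILE '/tmp/upload.csv'\n" +
  "INTO TABLE csv_staging\n" +
  "FIELDS TERMINATED BY ',' ENCLOSED BY '\"'\n" +
  "IGNORE 1 LINES;";

// Stage 3: after analysing the staging data, convert into the final table.
// The target columns should allow NULL so rows with conversion problems
// don't block the import.
const finalSql = "INSERT INTO imported_data (name, joined, score)\n" +
  "SELECT `name`,\n" +
  "       STR_TO_DATE(`joined`, '%Y-%m-%d'),\n" +
  "       CAST(`score` AS DECIMAL(10,2))\n" +
  "FROM csv_staging;";

console.log(stagingSql, "\n\n" + loadSql, "\n\n" + finalSql);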
//edit
Eh, I thought your files were a few GB in size. Loading large files into memory does not make sense.
Of course, you can use a library to read the CSV and analyze it in memory instead of in a temporary MySQL table. But you will not avoid content analysis either way; there is no hiding that automatic type detection without advanced AI systems is mediocre at best.
If you've found something that detects data types even partially, you can build on it. The tablesorter parsers can also be helpful.
If you are still looking for an answer, I would recommend an npm CSV parser package such as csv-parse (const parse = require('csv-parse')). It is simple: first get the CSV file data and run it through the parser function, then loop through the rows and put them in an object to use in your SQL query.
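A minimal sketch of that idea (using csv-parse's sync API), combined with a naive type guess per column; the heuristics and the table name are illustrative, and, as the other answer warns, things like account numbers will be misdetected as numbers:

// Parse a CSV with csv-parse, then guess a MySQL type for each column
// by scanning all of its values.
const { parse } = require("csv-parse/sync");

const csv = "name,joined,score\nAlice,2021-03-15,12.5\nBob,2020-11-02,7";
const rows = parse(csv, { columns: true, skip_empty_lines: true });

function guessType(values) {
  const nonEmpty = values.filter((v) => v !== "");
  if (nonEmpty.every((v) => /^-?\d+$/.test(v))) return "INT";
  if (nonEmpty.every((v) => /^-?\d+(\.\d+)?$/.test(v))) return "DOUBLE";
  if (nonEmpty.every((v) => !isNaN(Date.parse(v)))) return "DATETIME";
  return "VARCHAR(255)"; // fall back to text
}

const headers = Object.keys(rows[0]);
const columns = headers.map(
  (h) => "`" + h + "` " + guessType(rows.map((r) => String(r[h])))
);
console.log("CREATE TABLE imported_csv (\n  " + columns.join(",\n  ") + "\n);");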

Fill pdf form created in scribus from client side javascript

I have a PDF form into which I want to fill a password generated in JavaScript, so that the user can print it. The password is sensitive and may not be sent to the server, so this has to happen in client-side JavaScript. According to this post it is possible using Adobe Acrobat.
The idea is that one creates a pre-filled form with a unique placeholder value, and then replaces that value using simple search and replace in JavaScript when generating the final PDF for display to the user.
Since I do not own Acrobat, I thought I would try it with Scribus.
I generated a test form in Scribus and gave it the pre-filled value %HELLO%. But looking at the resulting PDF, I do not see how I can replace the %HELLO% value with the password by simple text replacement.
It turns out that while this post already gives the answer in its code, it does not explain it.
The value of the TextField has to be converted to a sequence of hex-encoded Unicode characters (4 hex digits per character), and it has to start with "fffe". Using this string, one can do the search and replace in the PDF document.
The code also updates the "xref" in the PDF, which one has to do when the length of the PDF changes (or some elements are positioned differently in the file). Since I did not change the length of the TextField's value, I did not have to do that.
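A minimal sketch of that replacement, following the description above (the encoding details are as the answer states them; a real implementation should verify them against the generated PDF):

// Encode a field value as described above: a "fffe" prefix followed by
// 4 hex digits per character.
function encodeFieldValue(str) {
  let hex = "fffe";
  for (const ch of str) {
    hex += ch.codePointAt(0).toString(16).padStart(4, "0");
  }
  return hex;
}

// pdfText is the PDF read as a binary string (e.g. via FileReader in the
// browser). The replacement must be exactly as long as the placeholder,
// otherwise the xref table would have to be updated as well.
function fillPassword(pdfText, placeholder, password) {
  const from = encodeFieldValue(placeholder); // e.g. for "%HELLO%"
  const to = encodeFieldValue(password);
  if (to.length !== from.length) {
    throw new Error("Password must be the same length as the placeholder");
  }
  return pdfText.replace(from, to);
}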

Fillable PDF to HTML

Is there any way to create simple fillable embedded PDFs that allow me to extract the text via JS or ASP?
Now I know there are some libraries like iTextSharp, pdf2html etc., but I have found that these are either overly complex or insufficient for my needs.
The scenario is this: I am trying to embed a tax document which the client may fill out; upon saving the document, the fields are then extracted into an object. As of now I have converted the PDF to SVG with Inkscape, but this still feels a bit bloated.
I just want to iterate through each field and store it accordingly.
Here's an example of one of the documents:
http://www.cra-arc.gc.ca/E/pbg/tf/t4/t4flat-fill-13b.pdf
One of the ways is to employ FDF or XFDF submits.
Basically, the browser displays the PDF, the user fills it in and clicks a submit button. The PDF viewer then sends information about the filled fields to the specified URL.
You can choose the format of the submit while creating the PDF.
The following is from the XML Forms Data Format Specification:
FDF is a simplified version of PDF. PDF and FDF represent information
with a key/value pair, also referred to as an entry. This example
shows the T and V keys with values enclosed in parentheses:
/T(Street)/V(345 Park Ave.)
XFDF, on the other hand, represents an entry with an XML
element/content or attribute/value pair, as shown in the corresponding
XFDF:
<field name="Street">
<value>345 Park Ave.</value>
</field>
Please be aware that not all PDF viewers are able to submit form data.
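For the server side (the question mentions JS or ASP), a minimal sketch assuming a Node.js/Express endpoint receives the XFDF submit; the URL and the regex-based extraction are illustrative, and a real XML parser would be more robust:

// Receive an XFDF submit from the PDF viewer and pull out the field values.
const express = require("express");
const app = express();

app.post("/pdf-submit", express.text({ type: "*/*" }), (req, res) => {
  const fields = {};
  const re = /<field name="([^"]+)">\s*<value>([^<]*)<\/value>\s*<\/field>/g;
  let m;
  while ((m = re.exec(req.body)) !== null) {
    fields[m[1]] = m[2];
  }
  console.log(fields); // e.g. { Street: "345 Park Ave." }
  res.send("Received");
});

app.listen(3000);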

What is the best way to store a field that supports markdown in my database when I need to render both HTML and "simple text" views?

I have a database and I have a website front end. I have a field in my front end that is plain text now, but I want it to support markdown. I am trying to figure out the right way to store it in my database because I have various views that need to be supported (PDF reports, web pages, Excel files, etc.).
My concern is that since some of those views don't support HTML, I don't just want to have an HTML version of this field.
Should I store 2 copies (one text-only and one HTML), or should I store HTML and try to remove the HTML tags on the fly when rendering out to Excel, for example?
I need to figure out correct format (or formats) to store in the database to be able to render both:
HTML, and
Regular text (with no markdown or HTML syntax)
Any suggestions would be appreciated as I don't want to go down the wrong path. My point is that I don't want to show any HTML tags or markdown syntax in my Excel output.
Decide like this:
Store the original data (text with markdown).
Generate the derived data (HTML and plaintext) on the fly.
Measure the performance:
If it's acceptable, you're done, woohoo!
If not, cache the derived data.
Caching can be done in many ways... you can generate the derived data immediately, and store it in the database, or you can initially store NULLs and do the generation lazily (when and if it's needed). You can even cache it outside the database.
But whatever you do, make sure the cache is never "stale" - i.e. when the original data changes, the derived data in the cache must be re-generated or at least marked as "dirty" somehow. One way to do that is via triggers.
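For the "generate on the fly" part, a minimal sketch in Node.js (the marked package here is just one choice of Markdown renderer, and the tag-stripping regex is a rough illustration, not a full HTML-to-text converter):

// Derive HTML and plain text from the stored markdown on demand.
const { marked } = require("marked");

function toHtml(markdown) {
  return marked.parse(markdown);
}

function toPlainText(markdown) {
  // Render to HTML first, then strip the tags; good enough for simple content.
  return marked.parse(markdown).replace(/<[^>]+>/g, "").trim();
}

const stored = "**Trip notes**\n\n- Bring a *passport*";
console.log(toHtml(stored));      // HTML for the web page
console.log(toPlainText(stored)); // plain text for the Excel export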
You need to store your data in a canonical format. That is, in one true format within your database. It sounds like this format should be a text column that contains markdown. That answers the database-design part of your question.
Then, depending on what format you need to export, you should take the canonical format and convert it to the required output format. This might be just outputting the markdown text, or running it through some sort of parser to remove the markdown or convert it to HTML.
Most everyone seems to be saying to just store the data as HTML in the database and then process it to turn it into plain text. In my opinion there are some downsides to that:
You will likely need application code to strip the HTML and extract the plain text. Imagine if you did this in SQL Server. What if you want to write a stored procedure/query that returns the plain-text version? How do you extract plain text in SQL? It's possible with a function, but it's a lot of work.
Processing the HTML blob can be slow. I would imagine for small HTML blobs it will be very fast, but there is certainly more overhead than just reading a plain text field.
HTML parsers don't always work well/they can be complex. The idea is that your users can be very creative and insert blobs that won't work well with your parser. I know from experience that it's not always trivial to extract plain text from HTML well.
I would propose what most email providers do:
Store a rich text/HTML version and a plain text version. Two fields in the database.
As is the case with email providers, the users might want those two fields to have different content.
You can write a UI function that lets the user enter in HTML and then transforms it via the application into a plain text version. This gives the user a nice starting point and they can massage/edit the plain text version before saving to the database.
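For the transformation step, a minimal browser-side sketch (using the standard DOMParser rather than hand-rolled regexes; the element IDs are hypothetical) that turns the HTML version into a starting point for the plain-text field:

// Convert the HTML version into plain text the user can then edit.
function htmlToPlainText(html) {
  const doc = new DOMParser().parseFromString(html, "text/html");
  return doc.body.textContent.trim();
}

// e.g. prefill the plain-text textarea from the rich-text editor's content
document.getElementById("plainTextField").value =
  htmlToPlainText(document.getElementById("htmlField").value);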
Always store the source; in your case that is the markdown.
Also store the formats that are frequently used.
Use on-demand conversion/rendering for the less frequently used formats.
Explanation:
Always keep the source. You may need it for various purposes, e.g. re-editing the same input, audit trail, debugging, etc.
No processor/RAM overhead when the same format is frequently requested; you are trading it for disk storage, which is cheap compared to the former.
Occasional overhead; see #2.
I would suggest storing it in the HTML format, since it is the richest one in this case, and removing the tags when obtaining the data for other formats (such as PDF, LaTeX or whatever). In the following question you'll find a way to remove tags easily:
Regular expression to remove HTML tags
From my point of view, storing the data (original and downgraded) in two separate fields is not only a waste of space but also an integrity problem, since one of the fields could, in theory, be modified without changing the other.
Good luck!
I think that what I'd do - if storage is not an issue - would be to store the canonical version, but automatically generate from it, in persisted, computed fields, whatever other versions one might need. You want the fields to be persisted because it's pointless doing the conversion every time you need the data. And you want them to be computed because you don't want them to get out of sync with the canonical version.
In essence this is using the database as a cache for the other versions, but a cache that guarantees you data integrity.
