I'm developing a site in which the user can create his own set of questions and answers. I'll have some data, for example:
Question1: "What's your name?";
Right answer: "Emiliano";
Wrong answers: ["Luke", "Mathew"]
Question2: "What's your favourite color?";
Right answer: "Green";
Wrong answers: ["Blue", "Black"]ù
...and so on.
and I want to give some users the chance to create their own tests.
My solution has to focus on performance (on my last quiz site I had 10k users online at the same time).
How can I do this?
I've thought I could generate static HTML pages from my data (this is how we've done it until now, but until now our quizzes were generated by us and not by users, so we only generated about 2 quizzes per day, and on those occasions we could afford to use an expensive, performance-wise, tool like PHP).
I've read I can use Node.js to generate my pages server-side, but is there some way to generate them client-side? I don't want to use my server any more than I have to.
I need to generate pages like
www.mysite.com/user-name/1.html
www.mysite.com/user-name/2.html
www.mysite.com/user-name/3.html
www.mysite.com/user-name/result.html
where 1.html contains the Question1 data and so on.
Also, let me know if I can do this in another "simple" way.
Thank you, and sorry if I've written anything meaningless (if so, let me know what mistakes I've made); it's my first week with JavaScript (and coding).
Related
I'd like to start a React/Next/Vue blog about cats; which framework doesn't really matter, the only important thing is that it will be based on a JavaScript framework.
For example I have a DB with cats:
ID Name (<= unique)
1 Kitty
2 Catty
N .....
So I want to write in markdown some cool facts about my cats, like a blog article:
fact_one.md
My cats are #Kitty and #Catty and they are awesome!
And then post it. Later, imagine that every cat has a personal page, and whoever visits that page sees (it doesn't matter exactly how) that this kitten has been mentioned in fact #1.
The point:
I am not asking how exactly browser notifications, event listeners and so on work. I am asking:
What exactly should I google/search/look for to parse user mentions in text via symbols like # (some regexp pattern)?
Is using Markdown appropriate in such a case or not? Maybe there is a better way?
What npm module (or maybe a blog article) would you recommend for it?
For example, I have taken a look at marked.js and GFM, but I am still not sure.
I mean that if my blog becomes popular, it will be a problem if every article is stored as a raw *.md file; I guess I should store them in a database. What's the best way to do that? I mean, should I store the text as a field in Markdown/HTML syntax, or store the raw *.md as binary? But if I store my text in the DB, how do I track the mentions? Or is there another way?
Any useful and helpful information will be appreciated and rewarded with up-voting.
You can use regex like \s#(\w+) to match and capture the names.
At the time of saving a new post, or updating a post, in the database, you can extract the mentions and store them in a relationship table between personal pages and the posts that mention them.
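For instance, a minimal sketch of that save-time extraction; PHP is used here just for illustration (the same regex works in JavaScript), and the mentions table, the $pdo connection and $postId are all assumptions, not part of the answer:

// Prepend a space so a mention at the very start of the text
// still satisfies the \s in the pattern.
preg_match_all('/\s#(\w+)/', ' ' . $postBody, $matches);
$names = array_unique($matches[1]); // e.g. ["Kitty", "Catty"]
foreach ($names as $name) {
    // One row per (post, cat) pair in a hypothetical "mentions" table.
    $stmt = $pdo->prepare('INSERT INTO mentions (post_id, cat_name) VALUES (?, ?)');
    $stmt->execute(array($postId, $name));
}

The cat's personal page can then list its mentions with a single query against that table.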
I am an amateur programmer and I want to make a multilingual website. My questions are: (for my purposes, let the English website be website no. 1 and the Polish one no. 2)
Should it be en.example.com and pl.example.com or maybe example.com/en and example.com/pl?
Should I make the full website in one language and then translate it?
How do I translate it? Using XML or what? Should website 1 and website 2 be different HTML files, or is there a way to translate an HTML file and then show the translation using XML or something?
If you need any code or anything else, tell me. Thank you in advance :)
1) I don't think it makes much difference. The most important thing is to ensure that Google can crawl both languages, so don't rely on JavaScript to switch between languages; have everything done server-side so both languages can be crawled and ranked in Google.
2) You can do one translation and then the other; you just have to ensure that the layout expands to accommodate more or less text without breaking. Maybe use lorem ipsum whilst designing.
3) I would put the translations into a database and then serve the particular translation depending on whether it is EN or PL in the domain name. Ensure that the webpage and database both use UTF-8 encoding, otherwise you will find that you get 'funny' characters being displayed.
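As a minimal sketch of point 3 in PHP (the exact host format and the English fallback are my assumptions, not the answerer's):

// e.g. "pl.example.com" vs "en.example.com"
$host = $_SERVER['HTTP_HOST'];
$lang = (strpos($host, 'pl.') === 0) ? 'pl' : 'en'; // fall back to English
// Keep the page itself UTF-8 so Polish characters render correctly.
header('Content-Type: text/html; charset=utf-8');

The $lang value can then drive which set of translations is pulled from the database.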
My advice is to start using a framework.
For instance, if you use CakePHP, then you write
__('My name is')
and in the translation file
msgid "My name is"
msgstr "Nazywam się"
Then you can easily translate to any other language, and it's pretty easy to implement.
Also, if you do not want to use a framework, you can check this link to see an example of how it works:
http://tympanus.net/codrops/2009/12/30/easy-php-site-translation/
While this question is probably not a good SO question due to its broad nature, it might be relevant to many users.
My approach would be templates.
Your suggestion of having two HTML files is bad for the obvious reason of duplication: say you need to change something on your site, you would always need to change two HTML files. Bad.
Having one HTML file and then parsing and translating it sounds like a massive headache.
Some templating framework could help you massively. I have been using Smarty, but that's a personal choice and there are many options here.
The idea is that you make a template file for your HTML and, instead of actual content, use labels. Then in your PHP code you include the correct language depending on cookies, user settings or session data.
Storing the labels is another issue here. Storing them in a database is a good option; however, remember you do not want to make hundreds of queries against the database to fetch each label. What you can do is store them in a database and then have it generate a language file (an array of labels -> translations) for faster access, and regenerate these files whenever you add or update labels.
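A hedged sketch of that generate-then-include idea, assuming an open $mysqli connection (the table, column and label names are made up for illustration):

// Regenerate whenever labels change: dump the DB rows into a PHP file...
$rows = $mysqli->query("SELECT label, translation FROM labels WHERE lang = 'pl'");
$labels = array();
while ($row = $rows->fetch_assoc()) {
    $labels[$row['label']] = $row['translation'];
}
file_put_contents('lang_pl.php', '<?php return ' . var_export($labels, true) . ';');

// ...then at request time it is one cheap include instead of many queries:
$labels = include 'lang_pl.php';
echo $labels['greeting']; // hypothetical label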
Or you can skip the database altogether and just store the labels in files; however, as these grow, they might not be as easy to maintain.
I think the easiest mistake for an "amateur programmer" to make in this area is to allow the two (or more) language versions of the site to diverge. (I've seen so-called professionals make that mistake too...) You need to design it so everything that's common between the two versions is shared, so that when the time comes to make changes, you only need to make the changes in one place. The way you do this will depend on your choice of tools, and I'm not going to start advising on that, because it depends on many factors, for example the extent to which your content is database-oriented.
Closely related to this are questions of who is doing the translation, how the technical developers and the translators work together, and how to keep track of which content needs to be re-translated when it changes. These are essentially process questions rather than coding questions, so not a good fit for SO.
Don't expect that a site for one language can be translated without any technical impact; you will find you have made all sorts of assumptions about the length of strings, the order of fields, about character coding and fonts, and about cultural things like postcodes, that turn out to be invalid when you try to convert the site to a different language.
You could make 2 different language files and use PHP constants to define the text you want to translate, for example:
lang_pl.php:
define("_TEST", "polish translation");
lang_en.php:
define("_TEST", "English translation");
Now you can let the user choose between the Polish and the English translation and, based on that, include the corresponding language file.
So if you have a text field, you set its value to _TEST (between PHP brackets),
and it will show the translation for the chosen language.
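Putting it together, a minimal sketch of the include step (the lang query parameter and the English default are my assumptions):

// Pick a language file based on the user's choice; default to English.
$lang = (isset($_GET['lang']) && $_GET['lang'] === 'pl') ? 'pl' : 'en';
require 'lang_' . $lang . '.php'; // defines _TEST
echo _TEST; // prints the translation for the chosen language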
The place I worked at did it like this: they didn't have too much text on their site, so they kept it in a database. As your tags include php, I assume you know how to use databases. They had a MySQL table called languages with a language id (in your case 1 for en and 2 for pl) and the texts in columns. So the column names were main_heading, intro_text, about_us... When the user comes, they select the language, and a PHP request gets the page content in that language. This is easy because your content is static and can be translated before the site goes online; if your content is dynamic (users add content), you may need to use machine translation, because you cannot expect your users to write their entries in all languages.
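A rough sketch of that lookup (the columns follow the description above; $mysqli, the id column name and the language-choice logic are assumptions):

$languageId = ($choice === 'pl') ? 2 : 1; // 1 = en, 2 = pl, as in the answer
$stmt = $mysqli->prepare('SELECT main_heading, intro_text, about_us FROM languages WHERE id = ?');
$stmt->bind_param('i', $languageId);
$stmt->execute();
$texts = $stmt->get_result()->fetch_assoc(); // e.g. $texts['intro_text']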
I'm trying to show a list of lunch venues around the office, together with their menus for today. But the problem is that the websites that offer the lunch menus don't always offer the same kind of content.
For instance, some of the websites offer a nice JSON output. Look at this one: it offers the English/Finnish course names separately, and everything I need is available. There are a couple of others like this.
But others don't always have nice output, like this one. The content is laid out in plain HTML, and the English and Finnish food names are not consistently ordered. Also, food properties like (L, VL, VS, G, etc.) are just normal text, like the food name.
What, in your opinion, is the best way to scrape all this available data in its different formats and turn it into usable data? I tried to make a scraper with Node.js (PhantomJS, etc.), but it only works with one website, and it's not that accurate when it comes to the food names.
Thanks in advance.
You could use something like kimonolabs.com; it is much easier to use, and it gives you APIs to keep your site updated.
Remember that it is best suited to tabular data content.
There may be simple algorithmic solutions to the problem. If there is a list of all available food names, that can be really helpful: you just look for occurrences of the food names inside a document (for today).
If there is no such food list, you can use TF-IDF. TF-IDF calculates a score for a word inside a document relative to the current document and the other documents. But this solution needs enough data to work.
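As a toy illustration of the idea (not production code; the tokenisation is deliberately crude): a word that appears often in today's page but rarely in the other pages gets a high score, which is what makes dish names stand out.

function tfidf($word, $doc, $allDocs) {
    // Term frequency: how often the word appears in this document.
    $words = preg_split('/\W+/u', mb_strtolower($doc), -1, PREG_SPLIT_NO_EMPTY);
    $tf = count(array_keys($words, mb_strtolower($word))) / max(count($words), 1);
    // Inverse document frequency: penalise words found in many documents.
    $withWord = 0;
    foreach ($allDocs as $d) {
        if (mb_stripos($d, $word) !== false) $withWord++;
    }
    return $tf * log(count($allDocs) / max($withWord, 1));
}

// e.g. a dish name like tfidf('lohikeitto', $todayPage, $allPages) should
// outscore a filler word like tfidf('lounas', $todayPage, $allPages).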
I think the best solution is something like this (a rough sketch follows the list):
Creating a list of all available websites that should be scraped.
Writing a driver class for each website.
Each driver is responsible for creating the general domain entity from that site's raw document.
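A minimal sketch of the driver idea in PHP; the interface, class, field names and URL are mine, not part of the answer:

interface MenuDriver {
    public function fetch();      // get the site's raw document
    public function toMenu($raw); // build the shared domain entity from it
}

class JsonSiteDriver implements MenuDriver {
    public function fetch() {
        return file_get_contents('https://example.com/menu.json'); // hypothetical URL
    }
    public function toMenu($raw) {
        $data = json_decode($raw, true);
        // Map the site-specific fields onto the common shape.
        return array('venue' => $data['name'], 'courses' => $data['courses']);
    }
}

Each messy HTML site gets its own driver with whatever parsing it needs, while the rest of the application only ever sees the common entity.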
If you can use PHP, Simple HTML DOM Parser along with Guzzle would be a great choice. These two provide a jQuery-like selector engine and a nice wrapper around HTTP.
You are touching on a really difficult problem. Unfortunately, there are no easy solutions.
Actually there are two different parts to solve:
data scraping from different sources
data integration
Let's start with the first problem: data scraping from different sources. In my projects I usually process data in several steps. I have dedicated scrapers for all the specific sites I want, and process them in the following order:
fetch raw page (unstructured data)
extract data from page (unstructured data)
extract, convert and map data into page-specific model (fully structured data)
map data from fully structured model to common/normalized model
Steps 1-2 are scraping-oriented and steps 3-4 are strictly data-extraction / data-integration oriented.
While you can implement steps 1-2 relatively easily using your own web scrapers or by utilizing existing web services, data integration is the most difficult part in your case. You will probably require some machine-learning techniques (shallow, domain-specific Natural Language Processing) along with custom heuristics.
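To make the four steps concrete, a compressed sketch in PHP (all function names and the URL are mine; the real implementations of steps 2-3 are site-specific):

function fetchRawPage($url)    { return file_get_contents($url); }    // 1. raw page
function extractData($html)    { return strip_tags($html); }          // 2. crude extraction
function buildSiteModel($text) { return array('lines' => preg_split('/\n+/', trim($text))); } // 3. page-specific model
function mapToCommonModel($m)  { return array('courses' => $m['lines']); } // 4. normalized model

$menu = mapToCommonModel(buildSiteModel(extractData(fetchRawPage('https://example.com/lunch')))); // hypothetical URL

In practice each site gets its own versions of steps 2-3, and only step 4's output is shared across sites.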
In the case of messy input like this one, I would process the lines separately and use a dictionary to strip out the Finnish/English words and analyse what is left. But in that case it will never be 100% accurate, due to the possibility of human-input errors.
I am also worried that your stack is not very well suited to such tasks. For this kind of processing I use Java/Groovy along with integration frameworks (Mule ESB / Spring Integration) to coordinate the data processing.
In summary: it is a really difficult and complex problem. I would rather accept less input-data coverage than aim to be 100% accurate (unless it is really worth it).
I am creating a list where each item contains 3 properties: caption, link, and image name. The list will be a "Trending now" list, where a related picture for the article is shown, along with a caption and a link to that article. Would it be best to store this information in an array or a database? With an array I could update the list by adding 3 new properties and removing the last 3. With a database, I could make a form where I submit the 3 properties, and it would update on its own without me touching the code. Would it be better to build this system with a JavaScript array or with a database? Wouldn't an array be faster? The list will have 10 items, and each item has 3 properties.
For a list of 10 items, you can definitely go with a simple array. If you need to store a bigger amount of data, then try localStorage.
Whichever solution you use, keep in mind that it will always be processed and stored in the browser.
Razor, your question touches on many principles of programming. Being a beginner myself, I remember having exactly those questions not too long ago.
That is why I answer in that 'beginner's' spirit:
If this were a web application, then your 'Trending now' list might be written as a ul list with li items, in a section of an index.html, with CSS styles to style the list and your page.
Then you might use JavaScript, jQuery, d3.js, etc., or other languages such as PHP, to access and do something with the data from those HTML elements.
pseudo-code example to get a value from an element:
var collectedValue = $("#your_element_id").val(); // jQuery: note the closing quote and .val()
To get the values into an array, you would loop over your items. Pseudo-code (jQuery-flavoured):
// collect every item's value into an array
$(".item").each(function () {
    my_array.push($(this).val());
});
Next you would have to decide how to store and retrieve values and/or arrays.
This can be done client-side, more or less meaning it stays within the browser you are working in, or server-side, more or less meaning your browser interacts with a server on another computer to save and retrieve data.
Speed is an issue when you have huge data sets; otherwise it is not really an issue, and certainly not for a short list. In my thinking, speed is a question for commercial applications.
If you use a database, then you have to learn the database language and manage the database, access to it, and so on. It is not overly complex in MySQL and PHP (that is how I started), but you would have to learn it.
In HTML/CSS/JavaScript solutions, others have pointed out JSON, which is often used for purposes such as yours.
localStorage is very straightforward but has its limitations. Both of these are easily understandable.
To start, I myself worked through simple database examples in MySQL/PHP.
Then I improved my HTML/CSS experience, and then I started gaining experience with JavaScript.
Best would be to enable yourself to answer your questions yourself:
learn the principles of HTML/CSS to present your 'Trending now' list
learn the principles of JavaScript to manipulate your elements
set up a server on your computer to learn how to interact with the server side (with MAMP or similar packages)
learn the principles of MySQL/PHP
learn about storage options, client- and server-side
It is fun, takes a while, and your preferences and programming solutions will depend on your abilities in the various languages.
I hope this answer is not perceived as being too simplistic or condescending; your question seemed to imply that such 'beginner's talk' might be helpful to you.
I've created a subscription-based system that deals with a large data-set. In its first iteration, it had semi-complicated joins that would execute, based on user-set filters, on every 'data view' page. Each query would fetch anywhere from a few kilobytes to several megabytes depending on the filter range. I decided this was unacceptable and so learned about APC (I had heard about its data-store features).
I moved all of the strings out of the queries into an APC preload routine that fires upon first login. In the same routine, I am running the "full set" join query to get all of the possible IDs for the data set into a $_SESSION variable. The entire set is anywhere from 100-800Kb, depending on what data the customer is subscribed to.
I convert this set into a JSON array and shuffle the data around dynamically when the user changes the filters. In creating the system I wanted it to seem as if the user was moving around lots of data very quickly, with minimal page loading (AJAX + APC when string representations are needed), as they played with the filters.
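For reference, a rough sketch of the kind of preload routine described above (the key name, the TTL, and runFullSetJoin() are placeholders for the real join query, not actual code from the system; $customerId is assumed known after login):

session_start();
function runFullSetJoin($customerId) { /* the semi-complicated join goes here */ return array(); }

$key = 'dataset_' . $customerId; // one cache entry per customer subscription
if (!apc_exists($key)) {
    apc_store($key, json_encode(runFullSetJoin($customerId)), 3600); // cache for an hour
}
$_SESSION['dataset'] = apc_fetch($key);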
My multipart question is, is it possible for the user to effectively "cancel" the initial cache/query routine by surfing to another page after the first login? If so, can I move this process to an AJAX page for preloading, or does this carry the same problem? Or, am I just going about all of this in the wrong way? I came up with the idea on my own and I'm worried that I've created an unusable monster.
Also, I've been warned that my questions suck and I'm in danger of being banned. Every question I've asked has come from a position of intelligent wonder, written as well as I knew how at the time, and so it's really aggravating when an outsider votes me down without intelligent criticism. Just tell me what I did wrong and I will quickly fix the problem. Bichis.