A bit of an odd question, admittedly; hopefully someone will be able to help.
Background:
I am writing a small eCommerce website using Laravel, but due to restrictions from one of the product suppliers, I need to redirect the checkout of their products to their own website, which runs Magento.
Proposed Idea:
I want to be able to add products to the cart on my website and then, when the customer checks out, redirect them to the supplier's website with their cart auto-filled.
The only way I can think of to do this is to use JavaScript to click the "add to cart" button on the supplier's corresponding page for each item. That is obviously not ideal, as it would have to open each page, which would be troublesome with many items. Is there any other way to accomplish this simply?
This will certainly not work, as it amounts to an XSS attack on your own user. Your website cannot execute actions on another website in the background, at least as long as that site sends no CORS headers to allow it.
You need to find a way to submit the orders to your supplier via some kind of API. Please do not go down your proposed path; it is a really bad idea.
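For illustration, here is a rough sketch of that idea in Node-style JavaScript. The endpoint, token, and payload shape are all placeholders; the real shape depends on whatever API the supplier exposes (Magento 2, for instance, ships with REST endpoints for carts and orders).

    // Rough sketch: forward the order to the supplier server-to-server instead
    // of driving their storefront from the customer's browser. Endpoint, auth,
    // and payload are hypothetical placeholders.
    const fetch = require('node-fetch');

    async function forwardOrder(cartItems) {
      // cartItems: e.g. [{ sku: 'ABC-1', qty: 2 }]
      const response = await fetch('https://supplier.example.com/api/orders', {
        method: 'POST',
        headers: {
          'Content-Type': 'application/json',
          'Authorization': 'Bearer SUPPLIER_API_TOKEN', // hypothetical credential
        },
        body: JSON.stringify({ items: cartItems }),
      });
      if (!response.ok) throw new Error('Supplier API error: ' + response.status);
      return response.json(); // e.g. an order id, or a URL to send the customer to
    }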
I'm a beginner at coding; I know JavaScript, but not advanced objects.
I'd like to know how to change HTML content along with its URL. For example, a website like Gmail has different pages for registering and logging in, and these two pages have different URLs.
What I'd like to know is how they change the URL along with the HTML when I click the "Log in" button. Is this done server-side, with something like Node.js and Express, or just with front-end JavaScript?
One last thing: do websites have multiple web pages, or is everything in one single HTML file?
Well, I have set up a practice project, but I don't know what I am doing.
I changed the HTML content with the jQuery library, but I don't know how to change the URL.
First I made a homepage with some text and two links to two forms.
I show the registration form on clicking "Sign in" and the log-in form on "Log in", and hide the homepage, using show() and hide(). But the URL doesn't change, so I can't work with it in Express. I tried history.pushState(), but it messed things up: I couldn't return to the homepage, and it didn't change the URL to what I wanted for each form. So I deleted it, and now I am stuck and don't know where I could find tutorials online.
My code doesn't contain anything other than what I described.
So please, can you explain to me how websites do that?
And one other thing: my Express server is now very slow; it takes nearly 5 minutes to start. I don't know if it's because of my PC, which unfortunately is old and not very good.
Can you please advise me on some tutorials and tips?
I agree that your question is too broad. Even though people invest many years at university to learn this stuff well, I believe in self-learning, so I will shed some light on your next steps in this world.
Here are some questions you may ask Google, or research wherever you want:
There are applications that host entire HTML documents on a server and react to HTTP requests by responding with different documents. These were the first kind in existence.
Today the trend is to host information on distributed servers (even the cloud) as services that act purely as information repositories, with entire client-side applications that handle that information and show it to the user in a more interactive, friendly way.
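To make the first, server-side style concrete, here is a minimal sketch using Node and Express; the routes and the inline HTML are just placeholders:

    // Minimal server-side routing: each URL maps to its own HTML response,
    // so the URL and the page content change together on every navigation.
    const express = require('express');
    const app = express();

    app.get('/', (req, res) => res.send('<h1>Home</h1><a href="/login">Log in</a>'));
    app.get('/login', (req, res) => res.send('<h1>Log in</h1><form>...</form>'));
    app.get('/register', (req, res) => res.send('<h1>Register</h1><form>...</form>'));

    app.listen(3000, () => console.log('Listening on http://localhost:3000'));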
So here are 4 first questions you can ask:
How does the HTTP protocol work (with HTML documents, for example)?
What's the difference between thin-client and fat-client applications?
What are web services?
How can I build a simple client-side application with different routes using a public web service? (See the sketch below for a minimal starting point.)
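On that last question, and on the pushState() trouble you described: a bare-bones client-side router looks roughly like the sketch below. The element IDs are assumptions about your markup; the popstate listener is the piece that keeps the back button working, which sounds like what was missing in your attempt.

    // Bare-bones client-side routing: pushState changes the URL without a
    // reload, and the popstate handler re-renders when the user goes back.
    function render(path) {
      // Assumed markup: three containers toggled by route.
      document.getElementById('home').hidden = path !== '/';
      document.getElementById('login-form').hidden = path !== '/login';
      document.getElementById('register-form').hidden = path !== '/register';
    }

    function navigate(path) {
      history.pushState({}, '', path); // update the address bar...
      render(path);                    // ...then update the visible HTML
    }

    // The missing piece that breaks the back button when omitted:
    window.addEventListener('popstate', () => render(location.pathname));

    render(location.pathname); // initial paint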
There is a lot of information to read, and this is not the way I learned it in university, so I cannot tell you it's the right way or even a good one. Anyway, you should consider taking a beginner web programming course if you already know basic algorithmic composition.
I wish you the best on this long path...
Good day to you,
I started my journey with code a few weeks ago; I'm currently mastering CSS and slowly proceeding to JS. I understand that the most efficient way to learn is actually by trying to develop your own products.
So I'm going to create a website where you can find a variety of information about very specific events around the world. Without going into unnecessary detail: I need to know which skills to acquire next to be able to launch such a project. The design should be a simple landing page with lots of filters, such as ticket price, country, and chosen date (based on information in the code and/or connected directly to Facebook events). The user can also choose to geolocate events via an interactive map where all options are visible. The purpose is to show the user the results after filtering the rest of the data out. After this, the user can click on events that match their requirements and proceed to subpages to gather more specific information.
How should I manage this? Could this be built with vanilla JS/jQuery, or will I also need to learn some PHP and create some kind of database? Could the event data be put directly into the lines of code using pure CSS/JS? Also, do you think launching this on WordPress could be helpful, bearing in mind its plugins, or should I build everything from scratch?
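For example, here is a rough sketch of what I mean by putting the data directly in the code; the fields and values are made up:

    // Hard-coded event data filtered in plain JavaScript: no PHP or database,
    // just an array in the script and a filter over it.
    const events = [
      { name: 'Event A', country: 'Poland',  price: 20, date: '2018-06-01' },
      { name: 'Event B', country: 'Germany', price: 55, date: '2018-07-15' },
      { name: 'Event C', country: 'Poland',  price: 80, date: '2018-07-20' },
    ];

    function filterEvents({ country, maxPrice }) {
      return events.filter(e =>
        (!country || e.country === country) &&
        (maxPrice === undefined || e.price <= maxPrice));
    }

    console.log(filterEvents({ country: 'Poland', maxPrice: 50 })); // -> Event A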
Sorry if this question sounds very noobie in style, but I'm completely new to this and very eager to start a project I can learn from. At the same time, I need some guidance on which language is most critical for me to learn next to make this happen.
Thanks
I have to create a web page where all the activity of one user can be seen by a second user.
Simply put, the second user sees everything the first user is doing on the page.
I know it is possible using AJAX or WebSockets, but I'm interested in a simpler solution.
Do you know of any simple solution that makes it possible to mirror another user's view?
I have browsed many pages, but I haven't found a satisfactory solution.
I assume that I control the source code of this application.
I greatly appreciate your help.
Check out Mozilla's TogetherJS.
It is a solution for users interacting with a page together in real time; it might do what you are trying to achieve.
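A minimal way to wire it in, going by TogetherJS's documented drop-in snippet (assuming their hosted script is still available):

    <!-- Load TogetherJS, then start a shared session from a button.
         The session mirrors clicks, scrolling, and cursors between users. -->
    <script src="https://togetherjs.com/togetherjs-min.js"></script>
    <button onclick="TogetherJS(this); return false">Share this page</button>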
I am exploring options to enable a checkout on a website for a friend. The requirements for the checkout are seemingly simple. The product in question can be viewed here:
http://www.cedartimemachines.com/CCB.html
I have worked with PayPal and Google Checkout to accomplish this; however, their buy buttons only allow for one drop-down menu with prices per product. What I need is for all the features selected by the customer to add up, and then to proceed to checkout with that total.
I have a thorough knowledge of HTML, CSS, and JavaScript, but I don't have a clue when it comes to server-side stuff or eCommerce. Any help would be GREATLY appreciated.
You might look at https://stripe.com/ - their API will let you set up your form however you'd like, and then have them do all the heavy lifting for the billing. You'll get a confirmation token back once the transaction completes, and can use that to deliver the product.
Check out the documentation to see if it'll work for you.
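For illustration, here is a rough server-side sketch of that token flow in Node; the key, prices, and description are placeholders, and Stripe's docs describe the currently recommended flow:

    // Sketch of the classic token-and-charge flow: the browser collects card
    // details with Stripe's form and posts back a one-time token, then the
    // server prices the order and creates the actual charge.
    const stripe = require('stripe')('sk_test_YOUR_SECRET_KEY'); // placeholder key

    async function chargeForOrder(token, selectedFeatures) {
      // Price the order server-side so the client can't tamper with the total.
      const totalCents = selectedFeatures.reduce((sum, f) => sum + f.priceCents, 0);
      return stripe.charges.create({
        amount: totalCents,   // smallest currency unit, i.e. cents
        currency: 'usd',
        source: token,        // token returned by Stripe's client-side form
        description: 'Custom product order', // illustrative
      });
    }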
I want to keep bots from following my external links by using rel=nofollow.
I have 2 questions about it:
1) Does this really help my page ranking? (I heard an SEO guy say this; supposedly the page ranking goes up because the probability that the user leaves the page is lower.)
2) Does it work when rel=nofollow is set through JavaScript in the $(document).ready() function?
EDIT: Thanks for the suggestions so far. To go into more detail on question 1:
how can the robot know(...)?
The robot knows this because it knows the page rank of the page I link to; if that rank is high, the probability is high that the user follows the link and thereby leaves my page. That's why it is supposedly good to have more incoming than outgoing links, where incoming links from high-ranked pages of course count more than incoming links from low-ranked websites. On the other hand, outgoing links to high-ranked pages supposedly increase the probability that the user leaves. But I am no expert in this; that's just what the SEO guy was telling me.
EDIT 2
The question is whether it improves my Google page ranking if I put rel="nofollow" on external links, and, in case it does, whether this still works when set with JavaScript.
Thanks in advance
1.
It's possible. Your pages flow PageRank internally, so having more outbound links will decrease the PageRank you flow to your own pages.
2.
Google is capable of reading JavaScript and will honor a nofollow on dynamically created links; however, I am not sure whether it works when dynamically adding nofollow to 'static' links.
Of course, there's much speculation when it comes to SEO.
I doubt it.
No, it doesn't work. Bots generally don't execute JavaScript code.
What?
"the page ranking should go up as the probability is lower that the user leaves the page"
How should a robot know this?
Robots don't process JavaScript; rel="nofollow" has to be present in the source markup as it is sent to the client.
And to add: rel="nofollow" does not guarantee that a link is not followed, or not counted as a link to the other page when building up its rank (the real process is much more complex); that depends on the robot/search engine.
Adding rel="nofollow" will not stop the bot from following the link, but it will stop the bot from passing any of your page rank to that link.
Oh, and as said before, most bots do not execute JavaScript. I believe Google has been playing around with one that does, but this is the exception, not the norm.
1) The more pages you link out to, the more it affects your authority ratio; you essentially want more links coming in than you send out. CTR is tracked by Google Analytics, and this is factored into their essentially black-box search ranking magic.
2) While it's commonly thought that robots don't process JavaScript, this is wrong; Google's current generation of robots is AJAX-aware.
I came here looking for an answer to this question myself. (Thanks Andre!)
I can attest to Google following links with href="javascript:..." URLs and reaching the correct pages, so that is no defense against unwanted link-crawling. I have also seen search result snippets include text inserted by JavaScript, so there is ample evidence of Google processing JavaScript.
If the links are internal, proper use of robots.txt would be the preferred, easier, and more bandwidth-efficient answer, of course, if you have access to that. (We don't on the server in question, thus my own search for answers.)
I shall be adding nofollow via JavaScript.
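Concretely, something like this in $(document).ready(); the selector is a rough guess at what counts as an external link, and, per the answers above, a crawler that doesn't run JavaScript will only ever see the original markup:

    // Add rel="nofollow" to external links once the DOM is ready.
    // Caveat: a crawler that doesn't execute JavaScript will only ever
    // see the original, nofollow-free markup.
    $(document).ready(function () {
      $('a[href^="http"]')                            // absolute URLs only
        .not('[href*="' + location.hostname + '"]')   // skip links to our own host
        .attr('rel', 'nofollow');
    });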