Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 7 years ago.
Let's say I have a website, example.com, and a page example.com/DHAS5KJ1H45GAS.html. There are no links to that page anywhere except in a MySQL database.
I'm assuming there is a setting to make search engines ignore the page (noindex, nofollow).
So my question: is there a program that can find all of a website's pages?
PS: I'm trying to make a page accessible only to users who know a passcode (without registration); I have a MySQL database with code/link pairs.
Maybe there is a better way to do this?
If you give a file a random name, don't link to it anywhere publicly, and don't have directory indexing enabled on your web server, there is theoretically no way it would be found by anyone without the link.
Do keep in mind, however, that anything you put in the URL will be stored in the user's browser history (someone who REALLY wanted to attack your site might use brute-force CSS history knocking to exploit this if your codes weren't sufficiently random), and that it would be pretty easy for anyone who has access to share the URL.
Create a robots.txt file in the root of your web directory, and set the content to:
User-agent: *
Disallow: /
This will keep well-behaved search engines from crawling (and thus indexing) the site. You can make the rules as specific as you like, allowing or disallowing individual paths if need be.
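For example, a more targeted robots.txt could block crawling of a single directory (the /private/ path here is purely hypothetical) while leaving the rest of the site open. Keep in mind that robots.txt is itself publicly readable, so you would not list the secret URL in it:
User-agent: *
Disallow: /private/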
If this file has a constant name, you can apply mod_rewrite. Simply create a .htaccess file in the folder where the file is located and add the following code:
<IfModule mod_rewrite.c>
RewriteEngine On
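# answer direct requests for this file with 403 Forbidden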
RewriteRule ^/?DHAS5KJ1H45GAS\.html$ - [F,L]
</IfModule>
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 3 years ago.
I have been working with a knowledge graph where each unique subject has a number of properties. Each subject has a unique URI; for example, take this page for a particular person: https://www.deutsche-digitale-bibliothek.de/person/gnd/118575775?lang=en
I know it is a basic question, but I am new to HTML and JavaScript. If I have a link to that URI, do I have to create an HTML file for each subject? What does /person/gnd/... refer to exactly?
Essentially, everything after the domain is treated as a route or pattern. How that route is handled depends on the configuration of the web server.
Host-Based
Host-based routing relies on the web server to route/forward the traffic, and in most cases it maps the URL to a physical directory on the filesystem. This is the traditional way web applications worked.
Path-Based
Path-based routing uses what is commonly referred to as a "front controller" to route the request. In this scenario, the web server routes/forwards all traffic to one file (commonly index.xxx). Inside that file you have an application router that matches patterns based on the URI. If it finds a match, that particular code runs and the result is returned.
Here is a link that also explains it: https://dzone.com/articles/the-three-http-routing-patterns-you-should-know
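To make the front-controller idea concrete, here is a minimal sketch (a Node.js server with assumed route names and port; not how the site above is actually built): one file receives every request, and the URL path decides which code runs.
// Minimal front-controller sketch: all requests hit this one file.
const http = require('http');

http.createServer((req, res) => {
  // e.g. /person/gnd/118575775?lang=en -> capture the identifier from the path
  const match = req.url.match(/^\/person\/gnd\/(\d+)(\?.*)?$/);
  if (match) {
    const gndId = match[1];
    // A real application would now look this subject up in the knowledge graph
    // and render the page from a single template.
    res.writeHead(200, { 'Content-Type': 'text/html' });
    res.end('<h1>Person with GND id ' + gndId + '</h1>');
  } else {
    res.writeHead(404, { 'Content-Type': 'text/plain' });
    res.end('Not found');
  }
}).listen(3000);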
If I have a link to that URI do I have to create an html file for each subject?
Thank god, no! A URL like http://example.com/some/path is basically a protocol (http://) + a server name (example.com) + a path (/some/path). The old-school way was indeed to have a resource for each path, but that is not required: given a certain path, the server is free to respond with whatever it wants.
I can't say for sure, but I strongly suspect deutsche-digitale-bibliothek.de has a single preformatted page for all people, into which each person's details are loaded afterwards. The bottom line is that when a lot of similar pages exist, it is both unnecessary and inadvisable to have one HTML file for each of them.
What do /person/gnd/... refer to exactly?
That is the path. That's the segment that is sent to the server as part of the URL as specified in the HTTP protocol.
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 4 years ago.
I need to make a landing page in 2 languages, switching the language by clicking on a flag.
As I understand it, I need to save the language state in the user's localStorage / cookies.
I see 3 options here:
Ship a .json file with the content in both languages to the user's machine. When the page opens, a script checks localStorage for a lang variable and then fills in the page via JS.
Save the lang variable in a cookie; on each request, the server does its magic and sends back a page built in the user's language.
Make duplicates of each page in each language.
Question 1:
Which way is better, when, and why? (cases: a one-page landing, a site with many pages)
Question 2:
When I make duplicates of each page, is it normal to have no index.html in the root directory at all, but instead:
/en/index.html
/ru/index.html
and a .htaccess file that redirects to /en/ by default, checks the cookies for a lang variable, and, if it exists, redirects the user to the corresponding folder?
Question 1:
Option 1:
Good practice if you have some sort of frontend framework (like Angular or React).
But if you have a static (or purely server-side rendered) page, it is not really the way to go, because it adds unnecessary complexity to your code.
Option 2:
A good way to do it if you only have backend rendering (for example, with PHP).
Option 3:
Do this only if you have a static web page (only HTML/CSS) that does not change and you have no programming experience at all.
Question 2:
Well, no: you should always have an index.html in the root, but it can simply redirect to the index.html of the default language.
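As a sketch of the .htaccess part of the question (the /en/ and /ru/ folders and the lang cookie come from the question itself, but the exact rules below are an untested assumption):
# send a bare request for the site root to a language folder based on a "lang" cookie
RewriteEngine On
# if a lang=ru cookie is present, redirect the root to /ru/
RewriteCond %{HTTP_COOKIE} (^|;\s*)lang=ru($|;)
RewriteRule ^$ /ru/ [R=302,L]
# otherwise fall back to the default language
RewriteRule ^$ /en/ [R=302,L]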
I would store each of the translations in a JSON file (en.json, ru.json) and change languages using URL parameters, e.g. index.html?lang=ru (this way you don't have to handle different paths).
If the URL parameter exists, save it to localStorage, load the respective JSON, and fill the page with the translated strings.
If there is no URL parameter, look in localStorage for a language. If there is nothing there, assume a default (en, for example) and load its translated strings.
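A minimal sketch of that flow (the /i18n/ folder, the en.json/ru.json file names, and the data-i18n attribute convention are assumptions, not an established API):
// Pick the language from the URL parameter, then localStorage, then a default,
// load the matching JSON file, and fill in every element marked with data-i18n.
async function initLanguage() {
  const params = new URLSearchParams(window.location.search);
  const fromUrl = params.get('lang');
  const lang = fromUrl || localStorage.getItem('lang') || 'en';
  if (fromUrl) localStorage.setItem('lang', fromUrl);

  const strings = await fetch('/i18n/' + lang + '.json').then((r) => r.json());

  document.querySelectorAll('[data-i18n]').forEach((el) => {
    const key = el.dataset.i18n;
    if (strings[key]) el.textContent = strings[key];
  });
}

initLanguage();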
Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 6 years ago.
On this site, which runs on Joomla and uses a plugin called Matukio, the links on the left used to work, but they no longer do, and we are desperate to figure out why.
The company that makes the plugin replied with the info below, but it sounds like he's just listing things that aren't really the cause (JUri::root() is working fine, the VWO stuff was already there when the links worked, etc.).
I doubt anyone can give much insight based on this limited info, but I'm taking a shot because Stack Overflow is the best site on the internet for help. If anyone has any ideas on things I should look at or test, please share.
FROM MATUKIO:
you have JavaScript errors on your page (not caused by Matukio):
07:43:49.475 ReferenceError: Heatmap is not defined
<anonym>events:129
1events:129:3
ga('set', 'VWO', Heatmap);
Additionally the following is failing:
07:44:55.490 Loading of mixed contents "http://www.workwave.com/index.php?option=com_matukio&view=requests&format=raw&task=route_link&link=index.php%3FItemid%3D283%26option%3Dcom_matukio%26view%3Deventlist%26art%3D0%26catids%3D0%26search%3D%26limit%3D10%26dateid%3D2%26fees%3D0%26locations%3D0%26organizers%3D0%26ordering%3D1%26start%3D0&Itemid=283" was blocked .1jquery.min.js:5:25679
As you see, the link looks right, but the protocol is wrong, e.g. http instead of https. Are you using any plugin for HTTPS redirection? Or .htaccess? It seems that JUri::root() is not working correctly on your instance. Joomla has a setting for HTTPS in the global config.
Kind regards,
Yves
It's hard to tell without more information, but it looks like your site is using HTTPS. This is a good thing. One of the restrictions, though, is that on a secure connection you can't load resources served over the non-secure HTTP protocol (it's a bit like buying a fancy lock for your front door and then leaving the window open).
This is likely coming from the Heatmap library. If you're pulling this library from a CDN, try changing the URL to "https://" instead of "http://", and that should fix it.
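For example (the CDN host and file name here are hypothetical), the fix is just the scheme in the script tag:
<!-- blocked on an HTTPS page: -->
<script src="http://cdn.example.com/heatmap.js"></script>
<!-- loads without a mixed-content error: -->
<script src="https://cdn.example.com/heatmap.js"></script>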
Closed. This question needs debugging details. It is not currently accepting answers.
Closed 8 years ago.
I've made a website and want to put it online now. It's already on a domain, but there's a problem: the URLs are still weird, and I want to fix them but don't know how.
How it is: example.com/webroot/page/home - example.com/webroot/page/membersInfo/infoBulletin/
How it should be: example.com - example.com/membersinfo/infobulletin
I want to get rid of /webroot/page/
Thanks a lot,
Anona
I use: Twig, HTML, Bootstrap, JS, JQ, CSS, PHP, JSON
A simple solution would be to move your site's files up two directories, but I would discourage that: if you keep your site files organized (i.e. in subdirectories), your life will be easier.
I think something like this in your .htaccess will work. If not, let me know and I'll play with it:
# turn the rewrite engine on if it isn't already
RewriteEngine On
# in case you only want to rewrite the path for, say, mysite.com, keep this RewriteCond as well
RewriteCond %{SERVER_NAME} mysite.com
# don't touch requests that already point into /webroot/page
RewriteCond %{REQUEST_URI} !^/webroot/page/
# rewrite the request at the root to have /webroot/page/ in front of it
RewriteRule ^(.*)$ /webroot/page/$1 [L]
Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 9 years ago.
I want to inject some JavaScript code during the HTTP call, before the onload function is called. How can I achieve this? Is this even possible? I have achieved it using a Chrome extension, but I want some other method of injecting the code that works in all other browsers, something like injecting through the URL bar while opening the page. The injection should work on every site I open.
Another neat way to do it is to have the request go through a proxy. The proxy can inject the JS by modifying the source before it arrives at the client. This is how most ad-supported (ad-injecting) proxies do it.
But then it's a security issue. That's why, just as you'd have to tell the user to install an extension, you'd have to tell the user to use the proxy. Otherwise, it's a no-go.
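As an illustration only (a Node.js sketch that handles plain-HTTP HTML responses; the injected script URL is a placeholder and this is nowhere near production-ready), a forward proxy can rewrite the response body before passing it on:
// Minimal injecting forward proxy: forwards each request and adds a script tag
// just before </body>, so the injected code is parsed before onload fires.
const http = require('http');

http.createServer((clientReq, clientRes) => {
  const proxyReq = http.request({
    host: clientReq.headers.host,
    path: clientReq.url,
    method: clientReq.method,
    headers: clientReq.headers
  }, (proxyRes) => {
    let body = '';
    proxyRes.on('data', (chunk) => { body += chunk; });
    proxyRes.on('end', () => {
      const injected = body.replace('</body>',
        '<script src="https://example.com/injected.js"></script></body>');
      clientRes.writeHead(proxyRes.statusCode,
        { 'Content-Type': proxyRes.headers['content-type'] || 'text/html' });
      clientRes.end(injected);
    });
  });
  clientReq.pipe(proxyReq);
}).listen(8080);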
Yet another way is to have the owners of the site embed a script tag that points to a permanent path. Behind that path, you can serve any script you like, including one that loads more scripts.
It can be as short as:
<script src="path/to/a/permanent/location"></script>
This is typically how publicly served APIs work, such as Google's: behind the permanent path is a script that loads all the APIs you need. But, as I said in the previous section, you need permission, this time in the form of having the webmaster embed the script on the page.
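For instance, the script behind that permanent location could be as small as this (the file names and URLs are placeholders):
// loader.js - served from the permanent location the site owner embeds once;
// it pulls in whatever further scripts you control.
['https://example.com/js/analytics.js', 'https://example.com/js/widgets.js'].forEach((src) => {
  const s = document.createElement('script');
  s.src = src;
  s.async = true;
  document.head.appendChild(s);
});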