My team hosts a set of central web pages that are used by many different organizations. These pages change their look and feel (fonts, images, etc.) based on which organization calls them. This is determined by a custom HTTP request header: "organization". I am building a test site to test the look and feel for ALL of the different organizations.
My plan was to have a web site with a drop-down where our QA people can choose an org, then click something (button/link) to open the central web pages with the look and feel for that org. Note that when calling these central pages, the URL in the browser MUST change to the URL of the page. So far, all ideas/samples I can find involve getting the page content from the remote server and displaying it IN the calling page (the URL does NOT change). Bottom line: I need to be able to set HTTP request headers and then open a new URL with those headers.
I can use JavaScript, ASP Classic, Java and/or other similar technologies/languages. Any ideas to get me started?
I did find some questions similar to mine, but none of their solutions allow the URL to change in the browser, so they don't work for me.
EDIT:
OK, so it seems this is not possible via code. We cannot use a proxy, since corporate has locked our browsers down and we cannot change proxy settings (even on dynamically created browser profiles). So is it possible to add custom HTTP request headers in IIS Express? If so, I can write Java test cases that modify the config files of IIS Express, then start the server and load a central test page that redirects to the appropriate pages. Can this be done?
I am dying to know how the architect of this solution intended for end users to add headers indicating their organizational membership. A corporate proxy, perhaps? Who knows. Seems crazy. Whatever the intention was, you should be trying to adhere to it if you want an accurate test.
But if you can't do that, and your sole need is to perform testing under the assumption that the headers can be added somehow (just not by you), and you have control over your browser, you can use an add-on such as Modify Headers for Firefox.
You could also build something separate to act as a proxy and add the headers, such as the solution to this question. Point your browser at the proxy and voila.
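For illustration, here is a rough sketch of such a proxy in Node.js; the central host name, port, and query parameter are assumptions you would replace for your environment. Since QA would browse to the proxy's address directly, this also sidesteps the locked-down browser proxy settings mentioned in the edit, though the address bar will show the proxy's URL rather than the central page's.

// Minimal reverse-proxy sketch (Node.js, no external dependencies) that adds
// the custom "organization" request header before forwarding.
const http = require('http');

const CENTRAL_HOST = 'central.example.com'; // hypothetical central server

http.createServer((clientReq, clientRes) => {
  // Take the org from a query parameter, e.g. http://localhost:8080/page?org=acme
  const url = new URL(clientReq.url, 'http://localhost');
  const org = url.searchParams.get('org') || 'default';

  const proxyReq = http.request({
    host: CENTRAL_HOST,
    path: url.pathname,
    method: clientReq.method,
    headers: { ...clientReq.headers, host: CENTRAL_HOST, organization: org },
  }, (proxyRes) => {
    clientRes.writeHead(proxyRes.statusCode, proxyRes.headers);
    proxyRes.pipe(clientRes);
  });

  clientReq.pipe(proxyReq);
}).listen(8080);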
Related
I'm trying to scrape the data from a website using file_get_contents, but instead of the webpage source I'm getting the following code:
<body onload="challenge();">
<script>eval(function(p,a,c,k,e,r){e=function(c){return c.toString(a)};if(!''.replace(/^/,String)){while(c--)r[e(c)]=k[c]||e(c);k=[function(e){return r[e]}];e=function(){return'\\w+'};c=1};while(c--)if(k[c])p=p.replace(new RegExp('\\b'+e(c)+'\\b','g'),k[c]);return p}('1 6(){2.3=\'4=5; 0-7=8; 9=/\';a.b.c()}',13,13,'tax|function|document|cookie|ddosdefend|1d4607e3ac67b865e6c7263260c34e888cae7c56|challenge|age|0|path|window|location|reload'.split('|'),0,{}))
The engine is WordPress. Is there any chance to get the real source?
file_get_contents itself seems to work fine. However, you are not being served the desired content but some JavaScript code which needs to be evaluated before redirecting to the content.
This might be because the website you want to scrape uses DDoS protection (e.g. something like CloudFlare) which detects your simple scraping attempt.
Usually, a DDoS protection service is a proxy between the original web server and your scraper. It inspects your request behavior, user agent, etc., and based on that either serves you the original web server's content or presents you with a challenge (e.g. a captcha, or simply requiring you to evaluate JavaScript).
If you can get the IP address of the original web server, you might be able to access it directly. The DNS resolution for the web server's name will direct you to the proxy, so you have to look elsewhere. Alternatively, use a web scraping library that emulates real browser behavior in PHP.
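To make the direct-to-origin idea concrete, here is a sketch in Node.js (the question uses PHP, but the mechanics are the same; in PHP you would pass a Host header through stream_context_create to file_get_contents). The IP and hostname below are placeholders: you would have to discover the real origin IP yourself, e.g. from historical DNS records, since current DNS points at the protection proxy.

// Sketch: request the page from the origin server's IP while sending the
// site's real hostname in the Host header, bypassing the protection proxy.
const http = require('http');

const ORIGIN_IP = '203.0.113.10';    // placeholder: the origin server's IP
const SITE_HOST = 'www.example.com'; // placeholder: the site's real hostname

http.get({ host: ORIGIN_IP, path: '/', headers: { Host: SITE_HOST } }, (res) => {
  let body = '';
  res.on('data', (chunk) => { body += chunk; });
  res.on('end', () => console.log(body)); // the real source, if the origin answers
});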
Every now and then I hear the opinion that having the same URL for non-Ajax and Ajax actions is bad.
In my app, I have forms that are sent with Ajax for a better user experience. For people who disable JavaScript, my forms work too. The same goes for some of my links. I used to have the same URL for both and just serve the appropriate content and Content-Type according to whether it's an Ajax call or not. This caused a problem with Google Chrome: Laravel 5 and weird bug: curly braces on back
My question now is: is it REALLY a bad idea to have the same URL for Ajax and non-Ajax actions? It's painful to make two separate URLs for each of those actions. Or maybe there is a good workaround to manage caching? In theory, one header can change the behavior entirely, so I don't see why I should create an extra layer in my app and force the same thing to have a separate URL.
Please share your opinions.
HTTP is flexible and allows you to design your resources the way you want. You design the APIs, and design comes down to personal preference. In this case, having one resource that responds to different types of request is absolutely fine. This is why HTTP headers like Content-Type exist.
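As a concrete illustration, here is a minimal sketch (assuming an Express-style Node.js server; the route and payload are made up) of one URL serving both kinds of client. The Vary header is the important part for the caching question: it tells caches that the response differs by that request header, which is exactly the class of problem behind the Chrome back-button bug linked above.

// One URL, two representations: JSON for Ajax calls, HTML for normal loads.
const express = require('express');
const app = express();

app.get('/items', (req, res) => {
  // Tell caches the response depends on this header, so a cached JSON
  // response is never replayed for a normal page load.
  res.set('Vary', 'X-Requested-With');
  if (req.get('X-Requested-With') === 'XMLHttpRequest') {
    res.json({ items: ['a', 'b'] });             // Ajax call gets JSON
  } else {
    res.send('<html><body>a, b</body></html>');  // navigation gets HTML
  }
});

app.listen(3000);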
And for the caching, you can use the HTTP ETag header. It's a caching header that forces the client to validate cached resources before using them.
The ETag or entity tag is part of HTTP, the protocol for the World Wide Web. It is one of several mechanisms that HTTP provides for web cache validation, which allows a client to make conditional requests. This allows caches to be more efficient and saves bandwidth, as a web server does not need to send a full response if the content has not changed.
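A minimal sketch of the mechanics, again assuming an Express-style server (the route and body are made up): hash the response body into an ETag, compare it with the client's If-None-Match header, and answer 304 Not Modified when nothing changed. Note that recent versions of Express already generate ETags for res.send by default; the sketch just makes the exchange explicit.

// ETag validation: the client re-sends the tag in If-None-Match and gets a
// cheap 304 instead of the full body when the resource is unchanged.
const crypto = require('crypto');
const express = require('express');
const app = express();

app.get('/resource', (req, res) => {
  const body = JSON.stringify({ updated: '2015-01-01', items: ['a', 'b'] });
  const etag = '"' + crypto.createHash('md5').update(body).digest('hex') + '"';

  res.set('ETag', etag);
  if (req.get('If-None-Match') === etag) {
    res.status(304).end();                   // cached copy is still valid
  } else {
    res.type('application/json').send(body);
  }
});

app.listen(3000);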
I am building an application that will allow users to open a Word document through a web page. This web application will open the Word document using the local Word instance on the machine.
I have two working solutions.
1. Using ActiveX (only on IE).
2. Using PsTools in a web service to remotely open a Word instance on remote machines (feasible since the application is an intranet application).
The second architecture is what I am following right now. It is based on a web service which receives the machine name through a JavaScript/jQuery call. The web method then uses PsTools to remotely execute an MS Word instance on the remote machine.
Both architectures work, but both have limitations. ActiveX works only on IE, and it also requires changes in network policy to allow ActiveX. PsTools works great, but I can't get the path of Word.exe and can only assume that it would always be at \\machinename\C$\Program Files(x86)\.....
We might make this application public as well and in that case our solution with PsTools will not work anymore.
I was just wondering if there is any other, more suitable, cross-browser way to open a local Word instance through a web application?
The document has to be modified in a remote location. One option would be to let the user download the document, modify it, and upload it to the server; this is out of the question since we are replacing a thick client and want to keep the user experience the same.
I am building an application that will allow user to open a word document through a web page.
If it is an intranet scenario, then you could use an application protocol with the Office URI schemes for links to the documents, which will then open in the locally installed client.
The Office URI scheme looks like this:
<scheme-name>:<command-name>"|"<command-argument-descriptor> "|"<command-argument>
For Word specifically, an example would be:
<a href='ms-word:ofe|u|https://example.com/example.docx'>Edit</a>
Here, ms-word: is the scheme, the ofe command stands for open-for-edit, u is the command-argument-descriptor meaning the URI that follows should be used, and finally comes the URI of the document itself. There are other commands, like ofv (open-for-view) and nft (new-from-template), and other command-argument-descriptors, like s for save.
Here is the complete reference: https://msdn.microsoft.com/en-us/library/office/dn906146.aspx
The protocols are registered with Windows when the Office client is installed.
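If you need to generate such links for a list of documents, a small helper keeps the command strings in one place. A sketch in plain browser JavaScript; the function name and URL are illustrative only:

// Build an Office URI that opens the given document for editing in Word.
function officeEditLink(docUrl) {
  // ofe = open-for-edit, u = the argument that follows is a URI
  return 'ms-word:ofe|u|' + docUrl;
}

var link = document.createElement('a');
link.href = officeEditLink('https://example.com/example.docx');
link.textContent = 'Edit in Word';
document.body.appendChild(link);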
You could enable WebDAV easily on your IIS server; the WebDAV client is built into Windows on the client side.
You can also use components like the FFWinPlugin plug-in, which is part of the SharePoint Foundation, or the OpenDocuments Control, which is an ActiveX control installed along with the Office client.
We might make this application public as well
I would discourage you from doing that unless your company owns or deals with services like OneDrive or Office.com. This can quickly get tricky, as mentioned in the other answer. Moreover, enforcing a proprietary client on the general public is not a good idea anyway. Further, even Microsoft's own solutions do not work reliably across browsers and work best with IE only (even Edge has problems with this), which would mean forcing a specific browser on the general public. Not a good idea.
However, if you really need to, then it would be better if you could use some of the solutions already built around WebDAV. Alfresco ECM (enterprise content management) is one example of a public offering which uses WebDAV in a way similar to your use case.
There is another one by IT Hit, and a live demo is here: http://www.ajaxbrowser.com. They also have a basic tutorial on how to set up your own WebDAV server along the same lines as your use case. You will need to find their documentation.
When you say "We might make this application public as well", what kind of scale are you talking about? Just a couple of folks from a team, or a real web application that needs to deal with edit conflicts, transactions, locking, performance, etc.? Even the intranet solution you mentioned will likely become a headache as soon as 2-3 people start to edit the same document.
For this type of document sharing, you basically have two options:
Significant investment in a rich web UI that behaves similarly to MS Word, with back-end services that will store the info in a scalable data store and provide simultaneous edits and document downloads, or
Integrating with a third party vendor API or white-label provider that offers similar capabilities for a fee. E.g. Box.com APIs, HyperOffice, FirePad, etc.
This would be a super-simple problem to solve if you can convert the document in question to a type of form. There are probably a hundred different services that offer embedded forms functionality with excellent reporting and database management. If a document in Word format is needed, then your app would just convert the stored data to a .doc/.docx document for users to download at will.
Whatever direction you go with, try to get out of the current PsTools-based setup. It's like a rinky-dink house of cards and, as @Matt-Burland mentions, likely to cause a security disaster pretty soon.
I'm making a mobile app using the PhoneGap framework, which is to say that the entire app is written in HTML, CSS, and JavaScript.
Part of the app requires me to fetch some information from a remote database.
I've spent the last hour reading up on how to make an XMLHttpRequest() to a remote domain, and I can't figure it out for the life of me.
As a bonus, since the goal of the request is to retrieve some database content, I need to send 3 parameters to the server to query with.
I keep seeing things about the same-origin policy, but I can't find anything that clearly says whether it applies to a PhoneGap app, which has no actual host. I've also seen about 6 fairly overcomplicated workarounds. Before I go to the trouble of implementing one of those, I'd like to confirm that there isn't some simpler way of doing this nowadays. Can anyone show an example, if so?
The same-origin policy does not apply when you are running your XHR from the file:// protocol on the mobile device. Here is a small example I wrote to show how to make an XHR request to Twitter:
http://simonmacdonald.blogspot.ca/2011/12/on-third-day-of-phonegapping-getting.html
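In the same spirit as that example, here is a minimal sketch of such a request sending three parameters as a query string; the endpoint and parameter names are made up for illustration:

// Cross-domain XHR from a PhoneGap app: the page runs on file://, so the
// same-origin policy does not get in the way.
function fetchRows(param1, param2, param3, onSuccess) {
  var query = 'a=' + encodeURIComponent(param1) +
              '&b=' + encodeURIComponent(param2) +
              '&c=' + encodeURIComponent(param3);
  var xhr = new XMLHttpRequest();
  xhr.open('GET', 'https://api.example.com/query?' + query, true);
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
      onSuccess(JSON.parse(xhr.responseText)); // assumes the server returns JSON
    }
  };
  xhr.send();
}

fetchRows('users', '42', 'desc', function (rows) { console.log(rows); });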
I apologize that there is already a similar question, but I'd like to ask it more broadly.
Is there any way at all to determine on the client side of a web application if requesting a resource will return a 401 status code and cause the browser to display an ugly authentication dialog?
Or, is there any way at all to load an MP3 audio resource in Flash which fails invisibly in the case of a 401 status code, rather than letting the browser show an ugly dialog?
The Adobe AIR run-time will suppress the authentication dialog if I set the "authenticate" property of the URLRequest object, but this property is not in the Flash run-time. Any solution which works on the client will do. An XMLHttpRequest is not likely to work, as the resources in question will be at different domains.
It is important to fail invisibly because the application will have a list of many audio resources to try, and it makes no sense to bother the user to authenticate for one when there are many others available. It is important that the solution work on the client because the MP3s in question come from various servers outside my control.
I'm having the same problem with the Twitter API - any protected user requires the client to authenticate.
The only solution that I could come up with was to load the pages server-side and return a list of the URLs with their HTTP response codes.
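A sketch of that server-side pre-check in Node.js (the URLs are placeholders): issue a HEAD request per URL and collect the status codes so the client can skip anything that answers 401.

// Probe each URL with a HEAD request and report its HTTP status code.
const https = require('https');

function checkStatus(url) {
  return new Promise((resolve) => {
    const req = https.request(url, { method: 'HEAD' }, (res) => {
      resolve({ url: url, status: res.statusCode });
    });
    req.on('error', () => resolve({ url: url, status: null })); // network failure
    req.end();
  });
}

Promise.all([
  'https://example.com/one.mp3',
  'https://example.com/two.mp3',
].map(checkStatus)).then((results) => console.log(results));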
"Is there any way at all to determine on the client side of a web application if requesting a resource will return a 401 status code and cause the browser to display an ugly authentication dialog?"
No, not in general. The 401 response is the only standard way for the server to indicate that authentication is necessary.
Just wrap your access to the resource that might potentially require authentication in an Ajax call. You can catch the response code and use JavaScript to do whatever you want. If the response code is alright, then use JavaScript to forward the user to the resource (i.e., play that sound).
Most likely this approach will generate slightly more load on the server (you might have to resort to loading the same resource several times in some circumstances), but it should work. Any good tutorial about how to use XMLHttpRequest should contain all you need. Take a look, for instance, at http://www.xul.fr/en-xml-ajax.html
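A browser-side sketch of that idea (function names are illustrative). One caveat beyond the cross-domain issue raised in the question: some browsers may still pop their own dialog when an XHR receives a 401 with a WWW-Authenticate header, so treat this as a sketch of the approach rather than a guaranteed fix.

// Probe a resource with a HEAD request before handing it to the player.
function probeResource(url, onAllowed, onBlocked) {
  var xhr = new XMLHttpRequest();
  xhr.open('HEAD', url, true);
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4) {
      if (xhr.status === 401) {
        onBlocked(url);  // authentication required: skip this resource
      } else {
        onAllowed(url);  // hand the URL to the player
      }
    }
  };
  xhr.send();
}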
If you are using URLRequest to get the files, then you are running into more than just a question of elegant error handling; you are running into a fundamental difference between the Flash and AIR run-times.
If you use the URLRequest object to retrieve files, Flash will throw a security error on every request to every server that has not set a policy file allowing such requests. AIR allows these requests since it basically IS the client. This makes sense, since it's the difference between installing an application and visiting a web page.
I hate to provide a non-answer, but if you can't make a server-side call and you are hitting a range of unknown servers, it's going to be a tough row to hoe.
But maybe I misunderstand: are you just trying to link to the files and prevent the user from getting bad links, or are you trying to actually load the files?