I've just deployed an "alpha" version of my ReactJS SPA using Firebase Hosting, and I've been trying to figure out a way to test SEO using Google Search Console. When I enter the URL and click Continue, it gives me the TXT record that I'm supposed to "copy into the DNS configuration" for my URL. The "URL prefix" option wants me to upload an HTML file for verification, but I don't think this is even possible with a single-page ReactJS app. I've done quite a bit of searching online but haven't found a definitive solution. Could somebody please explain the proper way to do this? Thank you.
I know there are different ways to connect your website to Google Search Console for verification. I opted for the older method that doesn't need DNS configuration: all you have to do is copy in a meta tag you'll be given.
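For what it's worth, the meta-tag method just means pasting the tag Search Console shows you into the <head> of your single index.html. A minimal sketch, assuming a Create React App layout where public/index.html is served for every route; the content token is a placeholder:

    <!-- public/index.html: paste the exact tag Search Console gives you -->
    <head>
      <meta name="google-site-verification" content="YOUR_TOKEN_HERE" />
      <!-- ...rest of the head... -->
    </head>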
I've written some code that retrieves some data from Google Sheets and then updates some content on my Google Sites page. However, while the script works when run on localhost, I encounter the following error:
"details": "Not a valid origin for the client: https://966655698-atari-embeds.googleusercontent.com has not been whitelisted for client ID MY-ID. Please go to https://console.developers.google.com/ and whitelist this origin for your project's client ID."
I have already enabled this for localhost and cleared my caches. The problem is the 'https://966655698-atari-embeds' part: each time the Google Site loads, it generates a new random number sequence. Does anyone know a workaround? The Google Site uses embedded HTML, which I believe is why the initialization failed.
I have tried to whitelist https://googleusercontent.com, which didn't work (I didn't think it would, because the domain changes), but I'm honestly incredibly stumped.
Google hosts all user content on somedomain.googleusercontent.com domains. I do not know for certain, but I'm almost sure that to save space they host their content dynamically, meaning that when the embedded HTML does not need to be actively hosted, it isn't. I had to find a way to host from a site whose origin stays the same for every request. For me, GitHub Pages was the answer.
I found this page on Adobe's website, which somewhat explains what googleusercontent.com does: https://helpx.adobe.com/analytics/kb/googleusercontentcom-instances.html
To set up GitHub Pages, this guide explains how: https://guides.github.com/features/pages/
You can add this origin to the Google developer console relatively easily, and any connection will be submitted from your username.github.io (I believe it also uses the HTTPS protocol). It also lets me deploy directly using Git version control, and it integrates nicely with WebStorm.
I need to write a script that allows me to automatically download the "Global Key Report" from the Tableau Customer Portal without manually logging in and clicking on the link.
Here is the link explaining how to download this report manually:
http://kb.tableau.com/articles/howto/managing-tableau-product-keys
I know that there are commands such as wget or similar options to download a file, but I'm not sure how I can use something like wget in this case.
If I knew the URI, or were able to figure it out, then I could go ahead and work out the code. My preference, however, is Python or JavaScript, neither of which I'm very familiar with.
Sorry if this question seems weird or simple, but I have minimal experience with writing code to download files from the web.
I looked at other similar posts, but was unable to understand anything.
Your help is much appreciated in advance.
There may be a way to do it without using an API, but IMHO it will be complicated. The steps involved would be:
1. Use the developer tools of your browser to observe the POST request that is sent to the server when you manually download the file.
2. Analyze the request to see how to modify and abstract it.
3. Either write a Bash script that uses wget to download the file, or write a Python script to do the same. Bash will be faster and more concise; Python will be more complicated but give more flexibility.
I will follow up with some details and pointers and possibly an example. I hope this helps with the general direction right now.
OK, an update to this answer to explain step 1. As an example I'll download some public airport arrivals data from the Bureau of Transportation Statistics, which can be found at this address: https://www.transtats.bts.gov/DL_SelectFields.asp?Table_ID=236
I use this example because it is how I learned this POST-analysis-and-download approach myself. I'll assume you are using Chrome or Chromium.
1. Go to the URL above.
2. Press Ctrl-Shift-I to open the developer tools.
3. Go to the "Network" tab and tick "Preserve log".
4. On the website, click the "Download" button. You'll be offered a Zip file for download, and data will start to appear in the developer tools.
5. Under "Name", click on the entry that looks like: Download_Table.asp?Table_ID=236&Has_Group=3&Is_Zipped=0
6. In the section that opens, look at the "Headers" tab and find the entry "Form Data". There you can see the details of the request.
7. Click "view source". This will show you the full POST request parameter string that leads to your download.
8. Write code in Bash using curl or wget, or write a Python script that generates a POST request for the download using a string with the correct parameters. If needed I'll get into details about that at a later time.
Hope this helps. When I have more time I'll update this answer to include an example of how to download with Bash / curl.
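In the meantime, here is a rough Python sketch of step 8, using the requests library. The form-data string is a placeholder you'd paste in from step 7, so treat this as a template rather than working code:

    import requests

    # URL taken from the DevTools "Name" entry (step 5)
    url = ("https://www.transtats.bts.gov/Download_Table.asp"
           "?Table_ID=236&Has_Group=3&Is_Zipped=0")

    # Placeholder: paste the full parameter string from "view source" (step 7)
    form_data = "UserTableName=...&sqlstr=...&varlist=..."

    resp = requests.post(
        url,
        data=form_data,
        headers={"Content-Type": "application/x-www-form-urlencoded"},
    )
    resp.raise_for_status()

    # The server responds with a Zip file; save the raw bytes
    with open("arrivals.zip", "wb") as f:
        f.write(resp.content)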
OK, so since all I got in response to this question is a downvote, I'm going to post an answer myself. To download something directly from the web, you need an API, meaning that the publisher's website should give you coding means, or handles so to speak, that enable you to connect to the object, data, etc. that you want to pull from the web.
In this particular case, as my luck would have it, there is no designated API to automate the downloading process.
After doing some research, I realized that Tableau connects to Salesforce to get the data, so I thought the Tableau people might be able to give me API details and it would be something similar to how people automate downloading reports from Salesforce.
I contacted the Tableau support team and they said currently there is no API to automate downloading the all keys report.
They directed me to something they have created called Web Data Connector, and I am trying to see if I can automate pulling the report through that.
There are also ways to scrape the data directly from the HTML table that represents it on the webpage, using Python; a rough sketch of that idea follows below. I'm not sure whether it will work here, but I'm working on both solutions. Will update this post if I make any progress.
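As an illustration of the general shape of that scraping approach (not the Tableau portal specifically, since the report sits behind a login), here is a minimal Python sketch; the URL and table layout are hypothetical:

    import requests
    from bs4 import BeautifulSoup

    # Hypothetical URL; the real report page sits behind the portal login
    page = requests.get("https://example.com/key-report")
    page.raise_for_status()

    soup = BeautifulSoup(page.text, "html.parser")

    # Grab the first table on the page and print each row's cell text
    table = soup.find("table")
    for row in table.find_all("tr"):
        cells = [cell.get_text(strip=True) for cell in row.find_all(["th", "td"])]
        print(cells)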
I have read a lot of Stack Overflow Q&As and blog posts about deep linking. The way I found most convenient was to use the header() function in my PHP code with a URL in it to redirect. But when I put myApp:// in the header() function, like this:
    header('Location: myApp://');
as the redirect link, it says:
    cannot open the page because too many redirects occur
Then I did some research and found this link.
Following that link, I used the JavaScript method to achieve the same thing:
    echo '<script type="text/javascript" charset="utf-8">window.location="360VUZ://";</script>';
This still doesn't work, as it changes the whole URL I need to open the app.
It says:
    The requested URL /subFolder/myApp:// was not found
Now I don't understand how the path is getting prepended; all I need is to hit this URL to open my application:

    myApp://
Any kind of help or suggestion is appreciated! Thanks in advance!
EDIT:
Specifically, when I am using JS to open the app, it says:
    The requested URL /subFolder/myApp:// was not found on this server.
So all I need to do is somehow remove the path prefix, since the browser is looking for the app on the server rather than on the device, and hit only 'myApp://', but I still don't know how. Please help!
With iOS 9.2, Apple deprecated the ability to launch apps via URI schemes in favor of Universal Linking. Consequently, this approach that you are exploring no longer works.
Your options are to set up Universal Links on your own or to leverage a third-party service like branch.io.
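For reference, Universal Links are backed by an apple-app-site-association JSON file served over HTTPS from the root of your domain (no file extension). A minimal sketch, in which the team ID, bundle ID, and path are placeholders:

    {
      "applinks": {
        "apps": [],
        "details": [
          {
            "appID": "TEAMID.com.example.myApp",
            "paths": ["/subFolder/*"]
          }
        ]
      }
    }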
I've created a trivial prototype app on Facebook. When my test script (JavaScript on Ubuntu command line, powered by Node.js) tries to access the app, it produces this error message:
    { error:
        { message: 'Invalid OAuth access token.',
          type: 'OAuthException',
          code: 190
        }
    }
So I'm trying to debug using Facebook's lint debugger. However, when I paste the app's access token into it, it responds with:
    Failed to get composer template data.
I have no idea what this means, and a lengthy stumble through Google reveals page after page of people who are similarly clueless.
Has anyone seen this error, and fixed it?
Details about the app:
It's configured to ask for read_insights and manage_pages alongside standard permissions. No other permissions are requested.
Settings, Basic: I've had to put a nonexistent URL in the secure canvas URL, since I don't have any SSL hosting anywhere. The non-SSL canvas URL is complete and points to an existing page.
"App Info" is all filled in (apart from Tagline which is optional).
I haven't submitted the app for approval for public use, and there are no "items for approval". I'm going to be the only person who ever uses it, this isn't necessary for this app.
Switching from "live" to "sandbox" and back again doesn't make any difference.
There are no warnings anywhere on the app developer page.
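For completeness, the token I'm pasting in is the app access token. One standard way to fetch a fresh one is the client_credentials grant against the Graph API; a minimal Python sketch, where the app ID and secret are placeholders:

    import requests

    # Placeholders: values from the app's Settings, Basic page
    APP_ID = "YOUR_APP_ID"
    APP_SECRET = "YOUR_APP_SECRET"

    resp = requests.get(
        "https://graph.facebook.com/oauth/access_token",
        params={
            "client_id": APP_ID,
            "client_secret": APP_SECRET,
            "grant_type": "client_credentials",
        },
    )
    print(resp.text)  # the response contains the app access token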
A client I worked for was experiencing a similar issue - when sharing certain URLs on Facebook, the Facebook Sharer wasn't picking up any of the thumbnails. Frustrated with that, the client was trying to clear the Sharer's cache using the debugger at https://developers.facebook.com/tools/debug/, hoping that this way Facebook will re-cache the page and display the corresponding images.
However, in doing so, the client was seeing the ambiguous "Failed to get composer template data." error, and turned to me for a solution.
I did my research, and it turned out that Facebook had decided to block the domain of the CDN that my client was using to serve images from. Since the pages were loading all images from that CDN, none of the images were getting picked up and the debugger was returning that "Failed to get composer template data." error.
The moment we started serving the images from a new CDN, Facebook started picking them up correctly, and the error disappeared.
Hope that helps you!
P.S. Please note, however, that this is not a permanent solution if you are violating Facebook's terms in some way. Yes, Facebook's spam-prevention algorithms do return false positives sometimes, but most of the time they have a pretty good reason to block your content.
P.P.S. Worth noting, in the case I'm describing, when we passed the CDN URL to the debugger, it returned "This link is blocked, or you have triggered an excessive amount of scrapes. If you think you're seeing this by mistake, please let us know."
I had the same error, "Failed to get composer template data."
I believe my path to Images was blacklisted by Facebook. The workaround was to create a virtual path that points to the Images folder, so I could reference /Images through an /OGImages virtual directory. After that I no longer received the error.
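If your server is Apache, the same idea can be expressed with an Alias directive along these lines (paths are hypothetical; this is only a sketch of the technique):

    # Serve the existing Images folder under a second, non-blacklisted path
    Alias /OGImages /var/www/mysite/Images
    <Directory /var/www/mysite/Images>
        Require all granted
    </Directory>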
I had much the same problem. I figured out I needed to use HTTPS instead of HTTP for the image link, and everything went fine after that.
Hope it helps!
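In other words, make sure the og:image tag points at an HTTPS URL; something like this, where the URL is a placeholder:

    <meta property="og:image" content="https://example.com/images/thumbnail.jpg" />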
I had just the same problem, and it appeared suddenly after several months without any site changes except content. At first I thought the Facebook spam filter had blocked our site, as suggested by a Ycombinator comment thread, but then I found the real problem.
In fact it was the official Facebook WordPress plugin that was acting up. Disabling it meant that the Facebook debugger could once again fetch our data, and sharing started to work immediately.
In my case it was a "Facebook Share Buttons" plugin for WordPress. I deactivated the plugin, and that resolved the issue.
I can't seem to operate from localhost. I'm trying to use the Facebook JavaScript SDK. My URL is http://localhost:8888, and in my app settings I have already defined my site URL as "http://localhost:8888". The domain I specified was "localhost". I'm trying to pull my name using FB.api, but it keeps showing "undefined". I wonder if anyone else has this problem.
I am operating on my Mac, using MAMP.
I've tried not using the SDK. I tried just using client-side validation as described here:
https://developers.facebook.com/docs/authentication/
And it works fine. I am able to display my name. But there's just something up with the SDK I can't figure out.
Help appreciated.