Linking to detail pages in Rally custom apps - javascript

I have been using the following code to insert a hyperlinked FormattedID into my grid. I can't use the standard formatting template because my grid has both User Stories and Features. When I click one of the links it takes me to a blank page (with the Rally wrapper). If I copy and paste the URL into the nav bar everything works perfectly, so I know the link isn't bad. The error the page throws is "TypeError: mainWindow.Rally.alm is undefined".
var idLink = i.get('FormattedID');
// storyDetailUrl / featureDetailUrl are placeholders standing in for the
// stories' and features' detail-page URLs.
if (idLink.match('US')) idLink = '<a href="' + storyDetailUrl + '" target="_blank">' + idLink + '</a>';
else if (idLink.match('F')) idLink = '<a href="' + featureDetailUrl + '" target="_blank">' + idLink + '</a>';
Also, if I take out the "target='_blank'" option the details page loads fine. But I would rather leave it in since my apps run within an iframe. It looks a little silly having a Rally page (wrapper and all) load within another Rally page. Any help would be appreciated!
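For illustration, a hedged sketch of one way such a renderer could build the links. ASSUMPTIONS not confirmed by the question: the record exposes ObjectID, detail pages live under "/#/detail/<type>/<ObjectID>", and renderFormattedID is just an illustrative helper name; verify the path against a working detail URL in your subscription.
// Hypothetical renderer for the FormattedID column.
function renderFormattedID(record) {
    var id = record.get('FormattedID');
    // 'US...' prefixes are User Stories; 'F...' prefixes are Features.
    var type = id.match('US') ? 'userstory' : 'portfolioitem/feature';
    return '<a href="/#/detail/' + type + '/' + record.get('ObjectID') +
        '" target="_blank">' + id + '</a>';
}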

We currently know this is a pretty big hole in our public API. The unreleased head revision of the SDK has better support for rendering links in apps (Rally.util.DetailLink).
Look for it in the next preview version of the SDK and/or the GA.

Unfortunately this isn't a supported use case for detail pages, although I can certainly see how this would be a useful page rendering option.
I'd recommend posting this as an Idea on Rally Ideas, so that other Rally customers can vote on it and it can gain visibility and traction as a feature request in the product.

Related

Third-party script won't load until page refresh

I've run into a problem with my blog, which I've been writing content for for months now. I'm using Gatsby v2 and Netlify CMS v2, and I host the entire blog with Netlify and GitHub.
I asked for help a few days ago but the thread got deleted due to insufficient clarification.
So, I'm trying again now.
I'm using this starter: https://github.com/thriveweb/yellowcake , and haven't changed much besides CSS yet.
I'm trying to improve my blog by adding share buttons from AddThis.com to the /src/templates/SinglePost.js template used by all my blog posts. I've successfully added their script to my blog:
<script type="text/javascript" src="//s7.addthis.com/js/300/addthis_widget.js#pubid=ra-545927b3c48573a"></script>
by using Netlify's Snippet injection option, which lets you inject analytics or other scripts into the site's HTML before the </body> tag.
But since it's a static site, the script doesn't load on other pages if I enter the site from the home URL. I have to refresh (reload) a blog post to see the share buttons if I'm coming from the homepage or any other page on the site. Is there a way to load the buttons automatically when a user reaches a blog post from the homepage?
Looking for solutions :)
You can try something like
<body onload="addScript()">

<script>
// Append the AddThis script (the widget URL from the question) once the
// page has loaded.
function addScript() {
    var my_script = document.createElement('script');
    my_script.setAttribute('src', '//s7.addthis.com/js/300/addthis_widget.js#pubid=ra-545927b3c48573a');
    document.head.appendChild(my_script);
}
</script>
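Note that Gatsby navigates between pages client-side, so a body onload handler only fires on the first full page load. A hedged Gatsby-specific sketch using the onRouteUpdate browser API; window.addthis and its layers.refresh() method are assumptions about the AddThis widget, so check AddThis's docs for the refresh call your widget type needs:
// gatsby-browser.js
// Re-render the AddThis buttons after every client-side route change.
export const onRouteUpdate = () => {
    // ASSUMPTION: the AddThis script is already injected (e.g. via Netlify
    // snippet injection) and exposes window.addthis.
    if (window.addthis && window.addthis.layers) {
        window.addthis.layers.refresh();
    }
};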

How to scrape the javascript portion of a webpage?

I'm trying to scrape a site in Node.js. I followed a great tutorial, but I've realized it might not be what I'm looking for; i.e. I might need to scrape the JavaScript portion of the page instead of the HTML one.
Is that possible?
The reason is that I'm looking to load the content of the portion of code below, which I found by inspecting a kayak.com page in Safari (it doesn't show in Chrome; see the URL below); it seems to live in a script section.
reducer: {"reducerPath":"flights\/results\/react\/reducers\/
https://www.kayak.com/flights/TYO-PAR/2019-07-05-flexible/2019-07-14-flexible/1adults/children-11?fs=cfc=1;legdur=-960;stops=~0;bfc=1&sort=bestflight_a&attempt=2&lastms=1550392662619
UPDATE: Unfortunately, this site uses bot/scrape protection: tools like curl get a page with a bot warning, and headless-browser tools like Puppeteer get a page with a captcha.
===============
As this line is present in the HTML source code and is not added dynamically by JavaScript execution, you can use something like this with the appropriate library API:
const extractedString = [...document.querySelectorAll('script')]
    // Pull each script tag's raw text content...
    .map(({ textContent }) => textContent)
    // ...keep the first one containing the marker string ('string' and
    // /regexp/ are placeholders to adapt to your page)...
    .find(txt => txt.includes('string'))
    // ...and extract the piece you want with a regular expression.
    .match(/regexp/);
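For a runnable Node.js version, a minimal sketch assuming the jsdom package (npm install jsdom); the 'reducer:' marker comes from the question, and per the UPDATE above kayak.com may now answer with a bot-protection page instead of the real content:
const { JSDOM } = require('jsdom');

// Fetch the page (use the full kayak.com URL from the question) and search
// its inline <script> tags for the one containing the reducer data.
JSDOM.fromURL('https://www.kayak.com/flights/TYO-PAR/...').then(dom => {
    const scriptText = [...dom.window.document.querySelectorAll('script')]
        .map(({ textContent }) => textContent)
        .find(txt => txt && txt.includes('reducer:'));
    console.log(scriptText);
});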

Google does not show my page in search results when I use a JS src to get content from a PHP page

In the name of God. I have JS code on a page on website 1, for showing content.
<script id="mdval" src="http://web1.com/api/pc.php" type="text/javascript" dval="sourceval">
With this method I receive articles and news from a database on website 2, on another server.
I don't have any problem receiving content from website 2, and the news and articles display very well on the website 1 page. But the problem is that Google does not show the website 1 page in search results. I tested it in Google search after a month, but the page does not show up.
Please note that I can only use client-side methods, with AJAX.
UPDATE:
I tested the page in the Fetch as Google tool. I selected "fetch and render" for a page that uses AJAX; Googlebot saw the page just like visitors do, with all content and images. I clicked "Submit to index" to index the page. Now, one day later, I searched for the URL in Google and Google listed it.
But now the problem is that Google shows only the URL and meta description in search results, and none of the page's content.
Please search for this link in Google: www.neginkoodebasir.ir/more?naapi= گیاهان مورد استفاده در تهیه کود سبز (بخش دوم) (the Persian query text translates roughly to "plants used in preparing green manure, part two").
The Googlebot crawler does not parse your data that comes from AJAX; it only crawls the source of the page (Ctrl+U in Chrome on Windows).
Take a look at https://developers.google.com/webmasters/ajax-crawling/docs/learn-more
You can also try putting the data from your PHP script into a cache and serving the cache, reloading it every X hours.
Also try this tool: support.google.com/webmasters/answer/6066468?hl=en
It seems that Google CAN crawl your scripts: "Times have changed. Today, as long as you're not blocking Googlebot from crawling your JavaScript or CSS files, we are generally able to render and understand your web pages like modern browsers. To reflect this improvement, we recently updated our technical Webmaster Guidelines to recommend against disallowing Googlebot from crawling your site's CSS or JS files." https://developers.google.com/webmasters/ajax-crawling/docs/learn-more

Redirect Users Clicking on a link in your site that goes to an offsite URL

We are changing our social account names/URLs. I'll have to go through our websites, emails, etc., and manually change a hundred-plus links. I think our CMS might be able to do most of the work on the web side, but for fun let's assume it can't.
Since social sites live outside of our domain (Facebook, Twitter, etc.), I was toying with the idea of replacing the old URLs with the new ones using jQuery or JS, such as:
$("a[href^='http://OldSocialLink1.com']")
.each(function()
{
this.href = this.href.replace(/^http:\/\/newSocialLink1\.com,
"http://OldSocialLink1.com");
});
Is there a better way to do this, besides changing all the links manually? Possibly onclick: if someone is on one of our pages and clicks an old social link to Facebook, change the old FB link to the new one, as in the sketch below.
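A hedged sketch of that onclick idea, using jQuery event delegation (the hostnames are the question's placeholders, and jQuery is assumed to be loaded):
// Rewrite old social links lazily, at click time, via event delegation.
$(document).on('click', "a[href^='http://OldSocialLink1.com']", function () {
    this.href = this.href.replace(/^http:\/\/OldSocialLink1\.com/,
        "http://NewSocialLink1.com");
    // No preventDefault() call: the browser follows the updated href.
});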
I know of a good search & replace utility available on GitHub. All you need to do is save it, transfer it to the root directory of your site, and navigate to it at 'yoursite.com/migrate.php'.
Say you wanted to change Facebook links from 'facebook.com/oldurl' to 'facebook.com/newurl': simply input the new and old URL on the script's page. This script has been very useful to me while migrating WordPress sites, which requires old URLs to be replaced with new ones. Hopefully it can help you as much as it helped me!

Injecting HTML into existing web pages

I'm interested in the concept of injecting a bit of HTML into existing web pages to perform a service. The idea is to create an improved bookmarking system - but I digress, the specific implementation is unimportant. I'm quite new to web development and so I have no definite idea as to how to accomplish this, though I have noticed a couple of possibilities.
I found out I can right-click > 'inspect element' and proceed to edit my browser's version of the HTML corresponding to the webpage I'm viewing. I assume this means I can edit what I see and interact with. Could I possibly create a script, run from a button on the bookmarks bar, that injected an iframe linking to a web service of my making? (And deleted itself after being used.)
Could I possibly use a chrome extension to accomplish this? I have no experience with creating extensions and so I have no clue what they're capable of - though I wouldn't be against learning.
Which of these would be best? If they are even valid ideas. Or is there another way that I've yet to know of?
EDIT: The goal is to have a user click a button in the browser if they would like to save this page. They are then presented an interface visually independent of the rest of the page that allows them to categorize this webpage according to their interests. It would take the current link, add some information such as a comment, rating, etc. and add it to the user's data. This is meant as a sort of side-service to a website whose purpose would be to better organize and display the browsing information of the user.
Yes, you can absolutely do this. You're asking about Bookmarklets.
A bookmarklet is just a bookmark where the URL is a piece of JavaScript instead of a URL. They are very simple, yet can be capable of doing anything to a web page. Full JavaScript access.
A bookmarklet can be engaged on any web page -- the user simply has to click the bookmark(let) to launch it on the current page.
Bookmark = "http://chasemoskal.com/"
Bookmarklet = "javascript:(function(){ alert('I can do anything!') })();"
That's all it is. You can create a bookmarklet link which can be clicked and dragged onto a bookmark bar, like this:
<a href="javascript:(function(){ alert('I can do anything!') })();">Bookmarklet</a>
Bookmarklets can be limited in size; however, you can load an entire external script from the bookmarklet.
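A minimal loader-bookmarklet sketch along those lines; https://example.com/bookmarking-app.js is a placeholder for wherever your service's script would live:
javascript:(function () {
    // Inject an external script into the current page; that script can then
    // draw the bookmarking UI and remove itself when the user is done.
    var s = document.createElement('script');
    s.src = 'https://example.com/bookmarking-app.js';
    document.body.appendChild(s);
})();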
You can do something like the <iframe> you refer to; simply put, here are some steps that may help you:
Create an XMLHttpRequest object and make a request for a page through it.
Set the innerHTML of an element to the response text of the request, i.e. the fetched HTML structure.
Let's assume you have an element with id="Result" in your HTML. The request goes like this:
var req = new XMLHttpRequest();
req.open('GET', 'http://example.com/mydocument.html', true);
req.onreadystatechange = function (aEvt) {
    // readyState 4 means the request is done; status 200 means OK.
    if (req.readyState == 4 && req.status == 200) {
        // Inject the fetched HTML into the #Result element.
        document.getElementById('Result').innerHTML = req.responseText;
    }
};
req.send(null);
When you're done, you can delete the injected HTML simply by:
document.getElementById('Result').innerHTML = '';
And then anything inside it will be gone.
However, you can't make requests to other servers, due to the browser's same-origin policy: the page you request has to be under the same domain or server (unless that server opts in via CORS). Take a look at "Using XMLHttpRequest" on the MDN reference pages for more information.
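To see the constraint in action, a small sketch using the modern fetch API; https://other-domain.example/page.html is a placeholder, and the request only succeeds if that origin is yours or sends permissive CORS headers:
// A cross-origin fetch is blocked by the browser unless the target allows it.
fetch('https://other-domain.example/page.html')
    .then(function (res) { return res.text(); })
    .then(function (html) {
        document.getElementById('Result').innerHTML = html;
    })
    .catch(function (err) {
        console.error('Request blocked or failed:', err);
    });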
