I'm working on my first Google Chrome extension. One of the things that concerns me is that I may need to push regular updates, since the extension is quite intricate.
DETAILS:
- The extension uses content scripts to manipulate the DOM by injecting web components.
- My extension relies on some external dependencies such as Underscore, Backbone and jQuery.
In brief, I've heard about the Web Store API, but I am interested in any thoughts on this topic.
How does one typically handle updates for external dependencies such as jQuery, etc?
The correct way is to include those external dependencies as part of your code and not link to a CDN or external repository. This is also explained in the official docs.
If for some reason you must update a library, just publish a new version of the extension, taking into account that it normally takes about a day until Chrome upgrades your users.
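As a rough illustration of that approach (the file names, version number and matches pattern below are placeholders, not from your question; manifest v2 shown), the libraries simply live inside the extension package and are listed in the manifest, and shipping a library update just means bumping "version" and republishing:

    {
      "name": "My Extension",
      "version": "1.2.0",
      "manifest_version": 2,
      "content_scripts": [{
        "matches": ["https://example.com/*"],
        "js": [
          "lib/jquery.min.js",
          "lib/underscore.min.js",
          "lib/backbone.min.js",
          "content.js"
        ]
      }]
    }

Scripts in the "js" array are injected in the order listed, so the libraries go before your own content script.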
There are a few exceptions. For example, Google recommends that their Google Analytics library be linked and injected. It's slower and less efficient, but some libraries ask you to do it.
What are the options for implementing a custom UI for searching the Alfresco repository?
I have found only customizations of the Share web scripts, which are more of a WCM thing. Could those be implemented and expanded for custom model searches over imported CMIS data?
Has anyone built a custom UI for communicating with the 5.0 or 5.1 Alfresco repository?
Any help or search paths would be greatly appreciated.
It's up to you, really.
The latest versions of Alfresco have a nice, well-documented REST API which you can consume. Additionally, any web scripts you create are also easily accessible with a simple HTTP request, so customizing is not a problem.
https://api-explorer.alfresco.com/api-explorer/
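As a minimal sketch of consuming it (the host, credentials and choice of the sites endpoint are assumptions, not from the question):

    // Minimal sketch: list sites through Alfresco's public REST API.
    // Adjust host, credentials and endpoint to your installation.
    const base = 'http://localhost:8080/alfresco/api/-default-/public/alfresco/versions/1';

    fetch(`${base}/sites`, {
      headers: { Authorization: 'Basic ' + btoa('admin:admin') }
    })
      .then(response => response.json())
      .then(data => {
        data.list.entries.forEach(({ entry }) => console.log(entry.id, entry.title));
      })
      .catch(console.error);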
The latest thing is what Gagravarr already mentioned: Angular 2 based components (which also talk to the above-mentioned REST API).
Here is a blog post with almost exactly the same title as your question. The short answer is that you can use whatever you want to build a custom app on top of Alfresco.
Yes, there are Angular 2 components that will be available some day, but for now they rely on REST API changes that have not shipped in any stable release of Alfresco, including Community Edition. They require an early access release (201606-EA or higher), which you should not run in production.
So, from whatever language you decide to use, you'll be making REST calls. But to which API? There are many. Here is the order of preference you should use when selecting an API for Alfresco (a small query sketch follows the list):
CMIS. Grab a library from Apache Chemistry.
Public REST API, see http://docs.alfresco.com/5.1/pra/1/topics/pra-welcome.html
Out-of-the-box web scripts marked "Public". See http://localhost:8080/alfresco/s/index for a list, then click down to an individual web script until you see its lifecycle.
Your own custom web scripts
Out-of-the-box web scripts with no lifecycle, or with a lifecycle other than public.
That last one is truly a last resort. Don't do it without being fully aware that you are writing against an API that will change without warning.
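To make option 1 concrete, a query can also be sent straight to Alfresco's CMIS 1.1 browser binding over HTTP; a Chemistry client library essentially wraps calls like this for you. The host, credentials and query statement below are assumptions:

    // Minimal sketch: run a CMIS query against the CMIS 1.1 browser binding.
    const repoUrl = 'http://localhost:8080/alfresco/api/-default-/public/cmis/versions/1.1/browser';
    const statement = "SELECT * FROM cmis:document WHERE cmis:name LIKE 'report%'";

    fetch(`${repoUrl}?cmisselector=query&q=${encodeURIComponent(statement)}`, {
      headers: { Authorization: 'Basic ' + btoa('admin:admin') }
    })
      .then(response => response.json())
      .then(data => console.log(data.results)) // matching objects come back as JSON
      .catch(console.error);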
My latest update to a Firefox add-on has been rejected because I used a custom jQuery UI build (generated on the jQuery UI site with just the widgets I wanted), and it fails their checksum check.
Your add-on includes a JavaScript library file that doesn't match our checksums for known release versions. We require all add-ons to use unmodified release versions, obtained directly from the developer's website.

We accept JQuery/JQuery-UI libraries downloaded from 'ajax.googleapis.com', 'jquery.com' or 'jqueryui.com'; and used without any modification. (file-name change does not matter) I'm sorry, but we cannot accept modified, re-configured or customized libraries.
Fair enough, I could just download the full one and resubmit, but I was wondering if it is possible to link to one instead?
If I try this:
    contentScriptFile: [
      self.data.url("https://ajax.googleapis.com/ajax/libs/jquery/2.1.3/jquery.min.js"),
      self.data.url("https://ajax.googleapis.com/ajax/libs/jqueryui/1.11.3/jquery-ui.min.js"),
      self.data.url("api.js")
    ],
I get an error at runtime telling me that content scripts must be local. Both Google and the API docs are proving elusive on an answer.
Does anyone know if this is possible and how?
Cheers
Rich
self.data.url("https://...")
It seems like you haven't read the documentation on data.url(), which clearly states that:
The data.url() method returns a resource:// url that points at an embedded data file.
Which means you cannot link to an external resource.
Does anyone know if this is possible and how?
No. contentScriptFile runs with (slightly) elevated privileges compared to regular web content; that's why you are not allowed to load scripts from sources that might change and could theoretically inject malicious code in the future.
If you want to rely on external libraries and keep them up to date, you could simply write a little build script that always downloads the newest version when building a new XPI.
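A minimal sketch of such a pre-build step, run with Node before packaging the XPI (the target paths under data/ are assumptions; the URLs are the ones from your question, pinned to specific versions, so repoint them whenever you want to ship newer releases):

    // fetch-libs.js: refresh the bundled copies of jQuery / jQuery UI
    // before building the XPI. Run with: node fetch-libs.js
    const https = require('https');
    const fs = require('fs');

    const libs = {
      'data/jquery.min.js':
        'https://ajax.googleapis.com/ajax/libs/jquery/2.1.3/jquery.min.js',
      'data/jquery-ui.min.js':
        'https://ajax.googleapis.com/ajax/libs/jqueryui/1.11.3/jquery-ui.min.js'
    };

    Object.keys(libs).forEach(dest => {
      https.get(libs[dest], res => {
        if (res.statusCode !== 200) {
          console.error('Failed to fetch', libs[dest], res.statusCode);
          return;
        }
        res.pipe(fs.createWriteStream(dest));
      });
    });

The add-on then references the local copies, e.g. self.data.url("jquery.min.js"), which keeps AMO happy as long as the files themselves are unmodified release builds.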
In principle you could just load the script via a privileged XHR and then pass it in as a string, but that's probably not gonna pass AMO review.
And a piece of personal opinion: since you're targeting a specific browser, you don't really need jQuery for its cross-browser logic; modern web APIs provide enough convenience methods that you can get pretty far with just vanilla ES6 JavaScript and state-of-the-art DOM APIs. Iterators and arrow functions also make bulk operations fairly concise.
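For example, a typical bulk DOM tweak that might otherwise justify pulling in jQuery (the selector and class name here are made up):

    // Roughly the vanilla equivalent of
    // $('.comment a.reply').addClass('highlight').on('click', handler)
    const links = Array.from(document.querySelectorAll('.comment a.reply'));
    links.forEach(link => {
      link.classList.add('highlight');
      link.addEventListener('click', event => {
        event.preventDefault();
        console.log('reply clicked:', link.href);
      });
    });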
Calling a standalone script from a script in a Google Docs document
The aim would be to create a sort of "custom add-on" that exists only in my documents: documents created from a Google Docs prototype containing a script with a few lines of code that call a standalone script.
Hi,
Right now, only container-bound scripts can use "advanced" interactions (creating menus, prompts, etc.) on a container Spreadsheet, Docs or Forms file.
I'd like this ability on a standalone script!
The primary use case for this is to allow developing and maintaining a single script that is used in multiple documents. Right now, if we have a script that does some nice things in a Spreadsheet (or any other container), we face two major problems.
First, it's very difficult to distribute your script. It often involves multiple steps that an end user has difficulty performing, or having them create the document from a template you set up previously.
And the second problem is maintaining/updating a distributed script, because we have one independent copy in each file. Even if you have access to all the files, updating is a nightmare, even if you use libraries and just need to get into each one to update the library version number (since library development mode only works if all the users have edit permissions on your library, which is crazy).
If we could have a single standalone script whose update/deployment version we, the developers, control for all our users/documents, just like we do for web apps, it would be great!
Lolo
It's called Apps Script libraries. Check the docs for more detail.
https://developers.google.com/apps-script/guide_libraries
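As a rough sketch of the pattern (the library identifier MyLib and the function names are made up): keep the real logic in the standalone script, publish it as a library, and leave only a thin stub in each bound document, so updating the library reaches every document at once.

    // Standalone script, published as a library (identifier assumed to be MyLib):
    function createMenu(ui) {
      ui.createMenu('My Tools')
        .addItem('Run report', 'runReport') // must name a function in the bound script
        .addToUi();
    }

    function generateReport(doc) {
      doc.getBody().appendParagraph('Report generated ' + new Date());
    }

    // Thin stub left in each container-bound (document) script:
    function onOpen() {
      MyLib.createMenu(DocumentApp.getUi());
    }

    function runReport() {
      MyLib.generateReport(DocumentApp.getActiveDocument());
    }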
I've inherited a high-traffic site that loads some Ext javascript files and I'm trying to trim some bandwidth usage.
Are the Ext libraries necessary for development only, or are they required for the finished site? I've never used Ext (Ext JS - Client-side JavaScript Framework).
The site loads ext-base.js (35K), ext-all-debug.js (950K), expander.js and exteditor.js. It appears that expander.js and exteditor.js contain some site-specific code, so they should stay?
But what about ext-base.js and ext-all-debug.js? Am I reading this correctly - are base and debugging libraries necessary for a live site?
Simply consult the documentation the previous developers have written for you. :P
To actually answer your question: you will more than likely want to keep all of the files available. You might, however, want to change ext-all-debug.js to ext-all.js, since the debug file contains non-minified JavaScript.
The previous posters are correct that if the site is actually using Ext JS, then you will need to keep the references to it. Assuming you do need to keep them, replacing ext-all-debug.js with ext-all.js will save some bandwidth. Additionally, consider using one of the CDNs available now. For instance, using Google's CDN, you will save not only your own bandwidth but also your clients' bandwidth, and decrease page load times.
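The swap itself is just a change to the script tags, along these lines (the local paths and the Ext version segment in the CDN URLs are guesses; check the CDN page mentioned below for the exact URLs for your release):

    <!-- Before: large, unminified debug build served locally -->
    <script src="/js/ext-base.js"></script>
    <script src="/js/ext-all-debug.js"></script>

    <!-- After: minified production build from the Ext CDN -->
    <script src="http://extjs.cachefly.net/ext-3.2.1/adapter/ext/ext-base.js"></script>
    <script src="http://extjs.cachefly.net/ext-3.2.1/ext-all.js"></script>

    <!-- Site-specific code stays local -->
    <script src="/js/expander.js"></script>
    <script src="/js/exteditor.js"></script>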
ExtJS files are available to be hosted on the Cachefly CDN: Ext CDN – Custom Builds, Compression, and Fast Performance.
Hosting the files remotely should remove the load for at least those files.
As to which you can safely remove, you need a JavaScript developer to work on documenting what's truly necessary to your application.
As to what ExtJS is, it's a JavaScript library and framework - a la jQuery, YUI, MooTools, PrototypeJS, etc. So indeed, it can be critical to your site if your site relies on JavaScript to work.
I don't know much about Ext, but I think it's safe to assume that expander.js and exteditor.js depend on ext-base.js and ext-all-debug.js. As such, removing the latter two will break the site's functionality.
The only thing I'd change would be to switch from the debug version of ext-all to the production one (which is most probably called ext-all.js; you should be able to load it from the same place the debug version is located, or from the Ext site).
One option would be to condense all of those files into one file (it would be larger, but it would reduce the overhead of multiple HTTP requests). Also verify that the server is sending the ETag and Expires headers, so that the browser can cache as much of it as possible...
Is it possible for the JavaScript you write for a XUL component to interact with the JavaScript defined in a webpage?
E.g., if a particular webpage has a dooSomethingNeat() function, can I have a button defined in a XUL overlay execute that function, or does it live in another namespace?
Put another way: if I'm looking to enhance the functionality of a website via my own code, does it make more sense to write a Firefox extension or use something like greasemonkey?
See my answer to another question here.
The webpage code does live in a 'namespace' separate from the scopes the browser code executes in.
It doesn't mean you can't access it from an extension, though.
On the other hand, running a function in a content page is not very easy to do securely at this moment.
Greasemonkey scripts (and Ubiquity scripts, which can also interact with web pages) are somewhat easier to develop than extensions, and Greasemonkey already implements the required security precautions to allow you to interact with web pages safely.
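A rough sketch of what that looks like in a Greasemonkey user script (the dooSomethingNeat() name comes from the question above; the @include pattern is made up). unsafeWindow is Greasemonkey's bridge into the page's own scope:

    // ==UserScript==
    // @name     Call the page's own dooSomethingNeat()
    // @include  http://example.com/*
    // ==/UserScript==

    // The user-script scope and the page scope are separate;
    // unsafeWindow reaches across into the page's JavaScript.
    if (typeof unsafeWindow.dooSomethingNeat === 'function') {
      unsafeWindow.dooSomethingNeat();
    }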
If you want others to use your script, packaging it as a standalone extension lowers the barrier to entry (on the other hand, existing GM users may prefer a simple GM script to a separate extension).
So if you can implement what you need with a GM script or a Ubiquity script, I'd say go with it. At least you can start with one, then convert it to an extension when you find something you can't do with GM.
If you need features not supported by Greasemonkey or if you just want to try creating an extension, it is also a viable option.
There is a Greasemonkey-to-Firefox-extension "compiler" available, but it isn't up to date with the latest GM changes.
However, it does have the basic GM framework for page interaction and security all wrapped up as a standalone extension, ready for you to modify and extend.
Whether to use a standalone extension or a GM script depends upon who will be installing this. Will the user base be willing to install Greasemonkey, THEN the script? Or is the extension alone enough of an installation barrier?
The GM license does allow for repackaging it with pre-set scripts, I believe, but I can't find citations for this at the moment.