Why is EOSJS JsSignatureProvider considered insecure?

In the eosjs docs (https://developers.eos.io/manuals/eosjs/latest/faq/what-is-a-signature-provider) it is said that JsSignatureProvider is insecure. Why exactly is it insecure?
I'm kinda new and would like to use it in my backend pet project.
I feel like if I were to write my own signature provider I would just be reinventing eosjs's JsSignatureProvider.

The main reason JsSignatureProvider is insecure is that the constructor takes the private keys directly in order to perform the signing, which exposes them to potential attack vectors (e.g. malicious browser extensions).
A more secure way may be to route the signing requests to a secure enclave without exposing the private keys, perform the signing there, and get back the signature.
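For context, the insecure pattern looks roughly like this (standard eosjs usage; the key string below is only a placeholder):

    const { Api, JsonRpc } = require('eosjs');
    const { JsSignatureProvider } = require('eosjs/dist/eosjs-jssig'); // marked dev/test only

    // The raw private key lives in ordinary JavaScript memory for the life of
    // the process/page, so anything that can read that memory (a compromised
    // npm dependency, an XSS payload, a malicious browser extension, a
    // debugger) can read the key as well.
    const signatureProvider = new JsSignatureProvider(['5K_PRIVATE_KEY_PLACEHOLDER']);

    const api = new Api({
      rpc: new JsonRpc('https://your-eos-node:8888', { fetch }), // assumes a fetch implementation is available
      signatureProvider,
    });

For a backend pet project that never leaves your machine this may be acceptable, but the safer pattern is exactly what is described above: keep the keys in keosd, a hardware wallet or another isolated signer, and pass only sign requests and signatures across the boundary.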


In clean architecture between entity layer and use-case layer. What is the use-case boundary?

As I understand it, an Entity carries fundamental properties, methods and validations. A User Entity would have name, DoB...email, verified, and whatever is deemed 'core' to this project. For instance, one project requires a phone number while a completely different project would deem the phone number unnecessary but require a mailing address.
So then, moving outside of the Entity layer, we have the Use-Case layer, which depends on the entity layer. I imagine a use-case as something slightly more flexible than an entity: this layer allows us to write additional logic and calculations while making use of an existing entity or multiple entities.
But my first uncertainty is whether a use-case can create derived values (DoB => Age) from entity properties and even persist them in storage, or does that need to already exist in the User Entity class? Secondly, does the use-case layer even NEED to use entities? Could I have a use-case that literally just sums(a, b) and may or may not persist that in storage anyway? Thirdly, when persisting entity-related properties into the database, does retrieving them require validation once again, and is this redundant and hurting performance?
Finally, the bigger question is: what is a use-case? Is a use-case meant to be adaptable by being agnostic of where its inputs come from and what it serves them to? Does this just mean dependency inversion removes the responsibility of which framework it ties to, i.e. using Express or Koa or plain http wouldn't force a rewrite of core logic? OR does it mean adaptable to something even greater, like whether it serves terminal applications directly or API request/response-style applications via a web server?
If it's the latter, then it's confusing to me, because it has to be agnostic of where its inputs come from and outputs go to, yet those outputs resemble the very medium they will be delivered to. For instance, designing a RESTful API, a use case may be...
getUserPosts(userId, limit, offset), which will output a format best suited to web API consumers (that, to me, is the application logic, right? for a specific application). And it's unlikely that I'll reuse the use-case getUserPosts for a different requester (some terminal interface that runs locally, which wants a more detailed response). So to me it shines when the time comes to switch between application-specific frameworks, like between express/koa/httprequest/connect for a REST API for the same application, or node.js/bun environments to interact with the same terminal, rather than one almighty use-case that criss-crosses every kind of application (web service and terminal simultaneously, or any other).
If it is almighty, should a use-case be designed with a more generalized purpose? Make it more configurable? Like previously, I could have added more parameters: getUserPosts(userId, limit, offset, sideloadingConfig, expandedConfig, format: 'obj' | 'csv' | 'json'). I suppose the forethought to anticipate different usage and scaling requires experience - unless this is where the open-closed principle shines to make it ready to be extended? Or is it just easier to make separate use-cases like getUserPostsWebServices, getUserPostsForLocal and getPremiumUsersPostsForWebServices? This makes sense to me, because now each use-case has its own constraints: it is not possible for WebServices to reach any more data fetching/manipulation than PostsForLocal or getPremiumUsersPostsForWebServices offers, and the reusability of WebServices is not tied to any web server framework. I suppose this is where I would draw the line for a use-case, but I'm inexperienced, and I don't know the answer to this.
I know this has been a regurgitation of my understanding rather than a concrete question, but it still points to the question of what the boundary and definition of a use-case is in clean architecture. Thanks for reading; would anyone chime in to clarify anything I said wrong?
But my first uncertainty is whether a use-case can create derived values (DoB => Age) from entity properties and even persist them in storage, or does that need to already exist in the User Entity class?
The age of a user is directly derived from the date of birth. I think the way it is calculated is the same across different applications. Thus it is application-agnostic logic and should be placed in the entity.
Defining a User.getAge method does not mean that it must be persisted. The entities in the clean architecture are business objects that encapsulate application-agnostic business rules.
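A minimal sketch of that idea (field names are just illustrative):

    // Entity: application-agnostic business rules, no persistence concerns.
    class User {
      constructor({ name, dateOfBirth, email }) {
        if (!(dateOfBirth instanceof Date)) throw new Error('dateOfBirth must be a Date');
        this.name = name;
        this.dateOfBirth = dateOfBirth;
        this.email = email;
      }

      // Derived value: computed from core state, not stored as a field.
      getAge(now = new Date()) {
        const age = now.getFullYear() - this.dateOfBirth.getFullYear();
        const hadBirthdayThisYear =
          now.getMonth() > this.dateOfBirth.getMonth() ||
          (now.getMonth() === this.dateOfBirth.getMonth() &&
            now.getDate() >= this.dateOfBirth.getDate());
        return hadBirthdayThisYear ? age : age - 1;
      }
    }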
Which properties are persisted is decided in the repository. Usually you only persist the basic properties, not derived ones. But if you need to query entities by derived properties, they can be persisted too.
Persisting time-dependent properties is a bit tricky, since they change as time goes by. E.g. if you persist a user's age and it is 17 at the time you persist it, it might be 18 a few days or even hours later. If you have a use case that searches for all users that are 18 in order to send them an email, you will not find them all. Time-dependent properties need a kind of heartbeat use case that is triggered by a scheduler and just loads (streams) all entities and persists them again. The repository will then persist the current value of the age, and it can be found by other queries.
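Sketched as code, such a heartbeat use case could look like this (the repository methods are hypothetical):

    // Triggered by a scheduler (cron, setInterval, ...), not by a user.
    class RefreshUserAges {
      constructor(userRepository) {
        this.userRepository = userRepository;
      }

      async execute() {
        // streamAll/save are assumed repository methods; the repository decides
        // to persist user.getAge() so that age-based queries stay accurate.
        for await (const user of this.userRepository.streamAll()) {
          await this.userRepository.save(user);
        }
      }
    }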
Secondly, does the use-case layer even NEED to use entities? Could I have a use-case that literally just sums(a, b) and may or may not persist that in storage anyway?
The use case layer usually uses entities. If your use case were as simple as summing two numbers it would not have to use entities, but I guess this is a rare case.
Even very small use cases like sums(a, b) can require the use of entities, if there are rules on a and b. These can be very simple rules, like a and b must be positive integer values. But even if there are no rules it can make sense to create entities, because if a and b are custom entities you can give them a name to emphasize that they belong to a critical business concept.
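For example, even a trivial sum can be expressed with a small value object that carries both the rule and the name (a sketch, names made up):

    // Value object encapsulating the rule "must be a positive integer".
    class PositiveAmount {
      constructor(value) {
        if (!Number.isInteger(value) || value <= 0) {
          throw new Error('PositiveAmount must be a positive integer, got ' + value);
        }
        this.value = value;
      }

      plus(other) {
        return new PositiveAmount(this.value + other.value);
      }
    }

    // The use case stays trivial but now speaks the business language.
    const total = new PositiveAmount(2).plus(new PositiveAmount(3));
    console.log(total.value); // 5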
Thirdly, when persisting entity-related properties into the database, does retrieving them require validation once again, and is this redundant and hurting performance?
Usually your application is the only client of the database. If so, then your application ensures that only valid entities are stored in the database. Thus it is usually not required to validate them again.
Valid can be context-dependent, e.g. if you have an entity named PostDraft, it should be clear that a draft doesn't have the same validation rules as a PublishedPost.
Finally, a note on the performance concerns. The first rule is: measure, don't guess. Write a simple test that creates, e.g., 1,000,000 entities and validates them. Usually a database query and/or the network traffic, or I/O in general, is the performance issue, not in-memory computation. Of course you can write code that uses weird loops that mess up performance, but often this is not the case.
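A throwaway measurement in that spirit could look like this (it reuses the User sketch from above; the absolute numbers are irrelevant, only the comparison with your I/O costs matters):

    // Construct and validate a large number of entities in memory and time it.
    console.time('construct + validate 1,000,000 users');
    for (let i = 0; i < 1000000; i++) {
      const user = new User({
        name: 'user-' + i,
        dateOfBirth: new Date(1990, 0, 1),
        email: 'user' + i + '@example.com',
      });
      user.getAge(); // exercise the derived value as well
    }
    console.timeEnd('construct + validate 1,000,000 users');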
Finally, the bigger question is: what is a use-case? Is a use-case meant to be adaptable by being agnostic of where its inputs come from and what it serves them to? Does this just mean dependency inversion removes the responsibility of which framework it ties to, i.e. using Express or Koa or plain http wouldn't force a rewrite of core logic? OR does it mean adaptable to something even greater, like whether it serves terminal applications directly or API request/response-style applications via a web server?
A use case is application-dependent business logic. There are different reasons why the clean architecture (and others, like the hexagonal architecture) makes use cases independent of the I/O mechanism. One is that it makes them independent of frameworks, which makes them easy to test. If a use case depended on an http controller, or better said, if you implemented the use case in an http controller, e.g. a REST controller, it would mean that to test it you need to start up an http server, open a socket, write the http request, read the http response and extract the data you need. Even though there are frameworks and tools that make such a test easy, these tools must ultimately start a server, and this takes time. Tests that are slow are not executed often, are they? And tests are the basis for refactoring. If you don't have tests, or the tests run slowly, you do not execute them. If you do not execute them, you do not refactor. So the code must rot.
In my opinion testability is most important, and decoupling use cases from any details, as Uncle Bob names the outer layers, increases the testability of use cases. Use cases are the heart of an application. That's why they should be easily testable and protected from any dependency on details, so that they do not need to be touched when a detail changes.
If it is almighty, should a use-case be designed with a more generalized purpose? Make it more configurable? Like previously, I could have added more parameters: getUserPosts(userId, limit, offset, sideloadingConfig, expandedConfig, format: 'obj' | 'csv' | 'json')
I don't think so. Parameters like sideloadingConfig and a format like json or csv are not parameters for a use case. These parameters belong to a specific kind of frontend, or better said, to a specific kind of controller. A use-case provides the raw business data. It is the responsibility of a controller, or better, a presenter to format it.
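In code, the split might look roughly like this (all names are illustrative):

    // Use case: returns raw business data, knows nothing about HTTP, CSV or JSON.
    async function getUserPosts(postRepository, { userId, limit, offset }) {
      return postRepository.findByUser(userId, { limit, offset });
    }

    // Presenters: each delivery mechanism formats the same raw data differently.
    const presentAsJson = (posts) => JSON.stringify(posts);
    const presentAsCsv = (posts) => posts.map((p) => p.id + ',' + p.title).join('\n');

The controller picks the presenter based on the requested format; the use case never sees that choice.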

How to guard against scope objects getting changed?

I'm an Angular noob. In an app I have taken over there is an object in the scope that defines the role of the current user (e.g. user.role=REGULAR).
Is there a way to keep a user from opening firebug and changing user.role=ADMIN?
For example, I have seen code that shows a tab based on a value in a scope, but I'm not sure how to keep a user from changing that value (and getting access to the tab). Is there a pattern to deal with this? Does everything access-related need to come directly from a web service/protected remote location?
There is no way to do this. Your design has a fundamental issue: it relies on client-side validation.
You can never ever ever ever ever trust anything coming from the client. Anything that you truly want validated or authenticated must be done on the server side, particularly security-related matters.
The most important rule is that once it leaves the server and hits the client, it's out of your control. Assume it's compromised, assume it's not trustworthy, and assume you have to check everything.
In your case, if a user is not an admin, don't even provide them with admin options.
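For example, with an Express-style backend the decision is made on the server from the authenticated session, no matter what the Angular scope claims (a sketch; the session setup and buildAdminReport are assumed to exist):

    // Server-side guard: the role comes from the server's own session store,
    // never from anything the browser's JavaScript state says about itself.
    function requireAdmin(req, res, next) {
      if (req.session && req.session.user && req.session.user.role === 'ADMIN') {
        return next();
      }
      return res.status(403).send('Forbidden');
    }

    app.get('/api/admin/report', requireAdmin, function (req, res) {
      res.json(buildAdminReport()); // hypothetical helper
    });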
Well, you can try to hide the object inside a closure or use Object.freeze in browsers that support it; however, there is no getting around the fact that the code is being sent to and executed on the client. Even if there were a foolproof way of preventing modification (which there isn't), the client could have modified the payload in Fiddler or something before it reached the browser.
With that in mind, you cannot trust anything on the client for access/authorization; you must verify this on the server or you'll have security holes/risks.
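To illustrate why freezing only deters casual tampering and is not a security boundary:

    var user = Object.freeze({ name: 'alice', role: 'REGULAR' });

    user.role = 'ADMIN';      // silently ignored (throws in strict mode)
    console.log(user.role);   // still 'REGULAR'

    // But nothing stops the user from replacing the whole object in the console,
    // patching the script in a proxy before it reaches the browser, or simply
    // calling your backend directly -- so authorization must live on the server.
    user = { name: 'alice', role: 'ADMIN' };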

Sandboxing Node.js modules - can it be done?

I'm learning Node.js (-awesome-), and I'm toying with the idea of using it to create a next-generation MUD (online text-based game). In such games, there are various commands, skills, spells etc. that can be used to kill bad guys as you run around and explore hundreds of rooms/locations. Generally speaking, these features are pretty static - you can't usually create new spells, or build new rooms. I however would like to create a MUD where the code that defines spells and rooms etc. can be edited by users.
That has some obvious security concerns; a malicious user could, for example, upload some JS that spawns a child process running 'rm -r /'. I'm not as concerned with protecting the internals of the game (I'm securing as much as possible, but there's only so much you can do in a language where everything is public); I could always track code changes wiki-style, and punish users who e.g. crash the server, or boost their power over 9000, etc. But I'd like to solidly protect the server's OS.
I've looked into other SO answers to similar questions, and most people suggest running a sandboxed version of Node. This won't work in my situation (at least not well), because I need the user-defined JS to interact with the MUD's engine, which itself needs to interact with the filesystem, system commands, sensitive core modules, etc. etc. Hypothetically all of those transactions could perhaps be JSON-encoded in the engine, sent to the sandboxed process, processed, and returned to the engine via JSON, but that is an expensive endeavour if every single call to get a player's hit points needs to be passed to another process. Not to mention it's synchronous, which I would rather avoid.
So I'm wondering if there's a way to "sandbox" a single Node module. My thought is that such a sandbox would need to simply disable the 'require' function, and all would be bliss. So, since I couldn't find anything on Google/SO, I figured I'd pose the question myself.
Okay, so I thought about it some more today, and I think I have a basic strategy:
// Shadow the Node API in the local scope so the eval'ed code cannot reach it.
var require = function (module) {
  throw new Error("Uh-oh, untrusted code tried to load module '" + module + "'");
};
var module = null;
// use a similar strategy for anything else susceptible

var loadUntrusted = function (code) {
  eval(code); // runs with require/module shadowed by the locals above
};
Essentially, we just use variables in a local scope to hide the Node API from eval'ed code, and run the code. Another point of vulnerability would be objects from the Node API that are passed into untrusted code. If, e.g., a buffer were passed to an untrusted object/function, that object/function could work its way up the prototype chain and replace a key buffer function with its own malicious version. That would make all buffers used for, e.g., file IO or piping system commands vulnerable to injection.
So, if I'm going to succeed in this, I'll need to partition untrusted objects into their own world - the outside world can call methods on it, but it cannot call methods on the outside world. Anyone should of course feel free to tell me of any further security vulnerabilities they can think of regarding this strategy.
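As a related note: Node's built-in vm module is essentially a sturdier version of this shadowing trick, because the untrusted code only sees the globals you put into the sandbox. The Node docs warn it is still not a true security boundary, so treat this as a sketch, not a guarantee:

    const vm = require('vm');

    // The sandbox is the ONLY global state visible to the untrusted code:
    // no require, no process, no Buffer -- only what the engine exposes.
    function loadUntrusted(code, engineApi) {
      const sandbox = { engine: engineApi };
      vm.createContext(sandbox);
      return vm.runInContext(code, sandbox, { timeout: 100 });
    }

    // Untrusted spell code can only reach the MUD through the exposed API.
    loadUntrusted("engine.dealDamage('goblin', 5);", {
      dealDamage: (target, amount) => console.log('hit ' + target + ' for ' + amount),
    });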

RabbitMQ + Web Stomp and security

RabbitMQ + Web Stomp is awesome. However, I have some topics I would like to secure as read-only or write-only.
It seems the only mechanism to secure these is rabbitmqctl. I can create a vhost, a user and then apply some permissions. However, this is where the STOMP and Rabbit implementation starts to break down.
Topics take the form /topic/blah in STOMP, which routes to "amq.topic" in Rabbit with a routing key of "blah". It would seem there is no way to set permissions for the routing key. It seems:
rabbitmqctl set_permissions -p vhost user ".*" ".*" "^amq\.topic"
is the best I can do, which is still "ALL" topics. I've looked into exchanges as well, but there is no way in javascript to define these on the fly.
Am I missing something here?
Reference: http://www.rabbitmq.com/blog/2012/05/14/introducing-rabbitmq-web-stomp/
Try this https://github.com/simonmacmullen/rabbitmq-auth-backend-http
It's much more flexible.
Basically it's a small auth plugin for Rabbit that delegates ACL decisions to a script over HTTP (over which you have total control), which only has to reply with "allow" or "deny".
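The endpoint you point it at can be a few lines of Express. The parameter names below follow the plugin's README (resource checks receive things like username, vhost, resource, name and permission, via GET or POST depending on configuration), but double-check them against the version you install:

    const express = require('express');
    const app = express();
    app.use(express.urlencoded({ extended: false }));

    // Called by the broker for every resource access decision.
    app.post('/auth/resource', (req, res) => {
      const { username, permission } = req.body;
      // Example policy: anyone may read, only the "publisher" user may write.
      const allowed = permission === 'read' ||
        (permission === 'write' && username === 'publisher');
      res.send(allowed ? 'allow' : 'deny');
    });

    app.listen(8000);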
Yes, with RabbitMQ Web-Stomp you're pretty much limited to the normal RabbitMQ permissions set. It's not ideal, but you should be able to get a basic permission setup right. Take a look at the RabbitMQ docs:
http://www.rabbitmq.com/access-control.html
Quickly looking at the stomp docs:
http://www.rabbitmq.com/stomp.html
yes, you can't set up permissions for a particular routing key. Maybe you should use the 'exchange' semantics and bind an exchange to a queue explicitly (i.e. don't use topics):
/exchange/exchange_name[/routing_key].
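From the browser that would look something like this (stomp.js over SockJS, as in the Web-Stomp blog post; the exchange and routing key names are just examples):

    var ws = new SockJS('http://my-server:15674/stomp');
    var client = Stomp.over(ws);

    client.connect('guest', 'guest', function () {
      // Subscribe via a named exchange and explicit routing key instead of /topic/...
      client.subscribe('/exchange/game.events/player.123', function (message) {
        console.log('received', message.body);
      });
    });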
Please do ask concrete questions about RMQ permissions on the rabbitmq-discuss mailing list. People there are really helpful.
Unfortunately, the RMQ permission set is not enough for some more complex scenarios. In that case you may want to:
Use STOMP only to read data, and publish messages only using some external AJAX interface that can speak directly to Rabbit internally.
Or don't use the web-stomp plugin and write a simple bridge between SockJS and RabbitMQ manually. This gives you more flexibility but requires more work.

Javascript eval (and friends)

Some claim eval is evil.
Any regular HTML page may look like:
<script src="some-trendy-js-library.js"></script>
</body>
</html>
That is, assuming the person doing this knows his job and leaves javascript to load at the end of the page.
Here, we are basically loading a script file into the web browser. Some people have gone deeper and use this as a way to communicate with a 3rd party server...
<script src="//foo.com/bar.js"></script>
At this point, it's been found important to actually load those scripts conditionally at runtime, for whatever reason.
What is my point? While the mechanics differ, we're doing the same thing...executing a piece of plain text as code - aka eval().
Now that I've made my point clear, here goes the question...
Given certain conditions, such as an AJAX request, or (more interestingly) a websocket connection, what is the best way to execute a response from the server?
Here's a couple to get you thinking...
eval() the server's output. (did that guy over there just faint?)
run a named function returned by the server: var resp = sock.msg; myObj[resp]();
build my own parser to figure out what the server is trying to tell me without messing with the javascript directly.
Given certain conditions, such as an AJAX request, or (more interestingly) a websocket connection, what is the best way to execute a response from the server?
The main criticism of eval when used to parse message results is that it is overkill -- you are using a sledgehammer to swat a fly with all the extra risk that comes from overpowered tools -- they can bounce back and hit you.
Let's break the kinds of responses into a few different categories:
Static javascript loaded on demand
A dynamic response from a trusted source on a secure channel that includes no content specified by untrusted parties.
A dynamic response from mixed sources (maybe mostly trusted but includes encoded strings specified by untrusted parties) that is mostly data
Side-effects based on data
For (1), there is no real difference between XHR+eval and <script src>, and XHR+eval offers few advantages over it.
For (2), little difference. If you can unpack the response using JSON.parse you are likely to run into fewer problems, but eval's extra authority is less likely to be abused with data from a trusted source than otherwise, so it's not a big deal if you've got a good positive reason for eval.
For (3), there is a big difference. eval's extra-abusable authority is likely to bite you even if you're very careful. This is brittle security-wise. Don't do it.
For (4), it's best if you can separate it into a data problem and a code problem. JSONP allows this if you can validate the result before execution. Parse the data using JSON.parse or something else with little abusable authority, so a function you wrote and approved for external use does the side-effects. This minimizes the excess abusable authority. Naive eval is dangerous here.
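Concretely, the safe handling of (3)/(4) versus a naive eval looks something like this (appendChatLine is a made-up handler):

    // Risky: whatever the response contains runs with the page's full authority.
    // eval(xhr.responseText);

    // Safer: parse the response as data, then let a function you wrote and
    // approved perform the side-effects.
    var msg = JSON.parse(xhr.responseText); // throws on anything that isn't JSON
    if (msg.type === 'chat') {
      appendChatLine(msg.user, msg.text);   // your function, your rules
    }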
"Evil" does not mean "forbidden". Sometimes, there are perfectly good reasons to use so-called "evil" features. They are just called "evil" since they can be, and often are, misused.
In your case, the client-side script is only allowed to make requests to "its own" server. This is the same server the original JavaScript came from, so the dynamic response is as trusted as the original code. A perfectly valid scenario for eval().
If you're fetching code from a domain you don't control, then handing over the code "raw" to the JavaScript interpreter always means you have to completely trust that domain, or else that you have to not care whether malicious code corrupts your own pages.
If you control the domain, then do whatever you want.
The server should provide you with data, not code. You should have the server respond with JSON data that your JS code can act on accordingly. Having the server send names of functions to be called with myObj[resp](); is still tightly coupling the server logic with the client logic.
It's hard to provide more suggestions without some example code.
Have your server return JSON, and interpret that JSON on the client. The client will figure out what to do with the JSON, just as the server figures out what to do with requests received from the client.
If your server starts returning executable code, you have a problem. NOT because something "bad" is going to happen (although it might), but because your server is not responsible for knowing what the client is or is not supposed to do.
That's like sending code to the server and expecting the server to execute it. Unless you've got a REALLY good reason (such as an in-browser IDE), that's a bad idea.
Use eval as much as you want, just make sure you're separating responsibilities.
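One way to keep that separation over a websocket, sketched with made-up message types and handlers:

    // The server sends data; the client owns the mapping from message type to behaviour.
    var handlers = {
      userUpdated: function (payload) { renderUser(payload); },   // hypothetical
      postCreated: function (payload) { addPostToList(payload); } // hypothetical
    };

    sock.onmessage = function (event) {
      var msg = JSON.parse(event.data);
      var handler = handlers[msg.type];
      if (handler) {
        handler(msg.payload);
      }
      // Unknown types are ignored -- the server can't make the client run arbitrary code.
    };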
Edit:
I see the flaw in this logic. The server is obviously telling the client what to do, simply because it supplied the scripts that the client executes. However, my point is that the server-side code should not be generating scripts on the fly. The server should be orchestrating, not producing.
