Does `web3.eth.getLogs` find reverted transactions? - javascript

I am trying to ascertain exactly how block explorers surface reverted transactions from an API call.
Do they use getLogs()? Or is there another standard way of finding specific types of transactions like this? If so, what differs in the search or the on-chain logging activity?
I have used getLogs(), but I haven't explored the intricacies of what makes transactions look different to API calls depending on their type.
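
For reference, here is a minimal sketch of the two relevant calls, assuming web3.js 1.x, a hypothetical provider URL, and a hypothetical contract address. The EVM discards a transaction's logs when it reverts, so reverted transactions never show up in log queries; every mined transaction does get a receipt, though, and its status field records whether it reverted.

const Web3 = require('web3');
const web3 = new Web3('https://mainnet.infura.io/v3/YOUR_KEY'); // hypothetical endpoint

async function inspect(txHash) {
  // Log queries only return logs that were actually recorded; a reverted
  // transaction's logs are discarded, so it can never appear here.
  const logs = await web3.eth.getPastLogs({
    fromBlock: 0,
    toBlock: 'latest',
    address: '0x0000000000000000000000000000000000000000', // hypothetical contract
  });
  console.log('log count:', logs.length);

  // A receipt exists for every mined transaction, reverted or not;
  // since Byzantium, status is false for reverted transactions.
  const receipt = await web3.eth.getTransactionReceipt(txHash);
  console.log('reverted?', receipt.status === false);
}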

In clean architecture, what is the boundary between the entity layer and the use-case layer?

As I understand it, an Entity carries fundamental properties, methods, and validations. A User Entity would have name, DoB... email, verified, and whatever is deemed 'core' to the project. For instance, one project requires a phone number, while a completely different project would deem the phone number unnecessary but require a mailing address.
So then, moving outside of the entity layer, we have the use-case layer, which depends on the entity layer. I imagine a use case as something slightly more flexible than an entity. This layer allows us to write additional logic and calculations while making use of an existing entity, or multiple entities.
But my first uncertainty is whether a use case can create derived values (DoB => Age) from entity properties and even persist them in storage, or does that need to already exist in the User Entity class? Secondly, does the use-case layer even NEED to use entities? Could I have a use case that literally just sums(a, b), and may or may not persist that in storage anyway? Thirdly, when persisting entity-related properties into the database, does retrieving them require validation once again? Is this redundant and hurting performance?
Finally, the bigger question is: what is a use case? Should a use case be adaptable by being agnostic of where its inputs come from and what it serves? Does this just mean that dependency inversion removes the tie to a particular framework, i.e. using Express or Koa or plain http wouldn't force a rewrite of core logic? OR does it mean adaptable to something even greater, like whether it serves terminal applications directly or request/response-style applications via a web server?
If it's the latter, then it's confusing to me, because the use case has to be agnostic of where its data comes from and goes to, yet its outputs resemble the very medium they will be delivered to. For instance, designing a RESTful API, a use case may be...
getUserPosts(userId, limit, offset), which will output a format best suited for web API consumers (that, to me, is the application logic, right? For a specific application). And it's unlikely that I'll reuse the getUserPosts use case for a different requestor (say, some terminal interface that runs locally and wants a more detailed response). So to me it shines when the time comes to switch between application-specific frameworks, like between Express/Koa/plain http/Connect for a REST API in the same application, or between Node.js and Bun for the same terminal app, rather than one almighty use case that criss-crosses every kind of application (web service and terminal simultaneously, or anything else).
If it is almighty, should a use case be designed with a more generalized purpose, and made more configurable? Like previously, I could've added more parameters: getUserPosts(userId, limit, offset, sideloadingConfig, expandedConfig, format: 'obj' | 'csv' | 'json'). I suppose the forethought to anticipate different usage and scaling requires experience, unless this is where the open-closed principle shines, making it ready to be extended? Or is it just easier to make separate use cases like getUserPostsWebServices, getUserPostsForLocal, and getPremiumUsersPostsForWebServices? That makes sense to me, because now each use case has its own constraints: it is not possible for WebServices to reach any more data fetching/manipulation than PostsForLocal or getPremiumUsersPostsForWebServices offers, and the reusability of WebServices is not tied to any web server framework. I suppose this is where I would draw the line for a use case, but I'm inexperienced, and I don't know the answer.
I know this has been a regurgitation of my understanding rather than a concrete question, but it still points to the question of what the boundary and definition of a use case is in clean architecture. Thanks for reading; would anyone chime in to clarify anything I got wrong?
But my first uncertainty is whether a use case can create derived values (DoB => Age) from entity properties and even persist them in storage, or does that need to already exist in the User Entity class?
The age of a user is directly derived from the date of birth, and I think the way it is calculated is the same across different applications. Thus it is application-agnostic logic and should be placed in the entity.
Defining a User.getAge method does not mean that it must be persisted. The entities in clean architecture are business objects that encapsulate application-agnostic business rules.
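
As a minimal sketch (the class shape is illustrative, not prescribed), a derived value like age can live on the entity without ever being persisted:

class User {
  constructor(name, dateOfBirth) {
    this.name = name;
    this.dateOfBirth = dateOfBirth; // basic property, persisted by the repository
  }

  // Application-agnostic business rule: derived on demand, not stored.
  getAge(now = new Date()) {
    const hadBirthdayThisYear =
      now.getMonth() > this.dateOfBirth.getMonth() ||
      (now.getMonth() === this.dateOfBirth.getMonth() &&
        now.getDate() >= this.dateOfBirth.getDate());
    const years = now.getFullYear() - this.dateOfBirth.getFullYear();
    return hadBirthdayThisYear ? years : years - 1;
  }
}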
Which properties are persisted is decided in the repository. Usually you only persist the basic properties, not the derived ones, but if you need to query entities by derived properties, they can be persisted too.
Persisting time-dependent properties is a bit tricky, since they change as time goes by. E.g. if you persist a user's age and it is 17 at the time you persist it, it might be 18 a few days or even hours later. If you have a use case that searches for all users that are 18 in order to send them an email, you will not find them all. Time-dependent properties need a kind of heartbeat use case that is triggered by a scheduler and just loads (streams) all entities and persists them again. The repository will then persist the current value of the age, and it can be found by other queries.
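
A rough sketch of such a heartbeat, with a hypothetical repository API that streams entities:

// Triggered by a scheduler (cron, setInterval, etc.); the repository
// API used here (streamAll, save) is hypothetical.
async function refreshTimeDependentProperties(userRepository) {
  for await (const user of userRepository.streamAll()) {
    // Re-persisting lets the repository store the current derived age,
    // so a query like "all users aged 18" stays accurate.
    await userRepository.save(user);
  }
}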
Secondly, does the use-case layer even NEED to use entities? Could I have a use case that literally just sums(a, b), and may or may not persist that in storage anyway?
The use-case layer usually uses entities. If your use case were as simple as summing two numbers, it would not have to use entities, but I guess this is a rare case.
Even very small use cases like sums(a, b) can require the use of entities if there are rules on a and b. These can be very simple rules, like a and b must be positive integer values. But even if there are no rules, it can make sense to create entities, because if a and b are custom entities you can give them a name to emphasize that they belong to a critical business concept.
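
A sketch of that idea (all names invented for illustration):

// A tiny value object that enforces the rule and names the concept.
class PositiveInteger {
  constructor(value) {
    if (!Number.isInteger(value) || value <= 0) {
      throw new Error('PositiveInteger requires a positive integer, got ' + value);
    }
    this.value = value;
  }
}

function sum(a, b) {
  // a and b are PositiveInteger instances, so the rule already holds.
  return new PositiveInteger(a.value + b.value);
}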
Thirdly, when persisting entity-related properties into the database, does retrieving them require validation once again? Is this redundant and hurting performance?
Usually your application is the only client of the database. If so, your application ensures that only valid entities are stored in the database, and thus it is usually not necessary to validate them again.
Valid can be context-dependent: e.g. if you have an entity named PostDraft, it should be clear that a draft doesn't have the same validation rules as a PublishedPost.
Finally, a note on the performance concerns. The first rule is: measure, don't guess. Write a simple test that creates, e.g., 1,000,000 entities and validates them. Usually a database query, the network traffic, or I/O in general is the performance issue, not in-memory computation. Of course you can write code with weird loops that mess up performance, but often this is not the case.
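
For instance, a throwaway measurement along those lines (reusing the User sketch from above) might look like this:

// Rough micro-benchmark: is in-memory validation actually the bottleneck?
console.time('create and validate 1,000,000 entities');
for (let i = 0; i < 1000000; i++) {
  const user = new User('user' + i, new Date(1990, 0, 1));
  if (!user.name || !(user.dateOfBirth instanceof Date)) {
    throw new Error('invalid user');
  }
}
console.timeEnd('create and validate 1,000,000 entities');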
Finally, the bigger question is: what is a use case? Should a use case be adaptable by being agnostic of where its inputs come from and what it serves? Does this just mean that dependency inversion removes the tie to a particular framework, i.e. using Express or Koa or plain http wouldn't force a rewrite of core logic? OR does it mean adaptable to something even greater, like whether it serves terminal applications directly or request/response-style applications via a web server?
A use case is application-dependent business logic. There are different reasons why clean architecture (and others, like the hexagonal architecture) makes use cases independent of the I/O mechanism. One is that it keeps them independent of frameworks, which makes them easy to test. If a use case depended on an HTTP controller, or better said, if you implemented the use case inside an HTTP controller (e.g. a REST controller), you would need to start up an HTTP server, open a socket, write the HTTP request, read the HTTP response, and extract the data just to test it. Even though there are frameworks and tools that make such a test easy, those tools must ultimately start a server, and this takes time. Tests that are slow are not executed often, are they? And tests are the basis for refactoring: if you don't have tests, or the tests run slowly, you do not execute them; if you do not execute them, you do not refactor; and so the code rots.
In my opinion testability is the most important concern, and decoupling use cases from any details, as Uncle Bob names the outer layers, increases their testability. Use cases are the heart of an application. That's why they should be easily testable and protected from any dependency on details, so that they do not need to be touched when a detail changes.
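
To illustrate with invented names: a use case that depends only on an injected repository interface can be tested with an in-memory fake, no server required.

class GetUserPosts {
  constructor(postRepository) {
    // The dependency points inward: the use case sees an interface,
    // not a database driver or an HTTP framework.
    this.postRepository = postRepository;
  }

  async execute(userId, limit, offset) {
    return this.postRepository.findByUserId(userId, limit, offset);
  }
}

// The test: no HTTP server, no sockets, just a fake repository.
async function testGetUserPosts() {
  const fakeRepository = {
    findByUserId: async () => [{ id: 1, title: 'hello' }],
  };
  const posts = await new GetUserPosts(fakeRepository).execute('user-1', 10, 0);
  console.assert(posts.length === 1, 'expected exactly one post');
}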
If it is almighty, should a use case be designed with a more generalized purpose, and made more configurable? Like previously, I could've added more parameters: getUserPosts(userId, limit, offset, sideloadingConfig, expandedConfig, format: 'obj' | 'csv' | 'json')
I don't think so. In particular, sideloadingConfig and formats like JSON or CSV are not parameters for a use case; those parameters belong to a specific kind of frontend, or better said, to a specific kind of controller. A use case provides the raw business data. It is the responsibility of a controller, or better a presenter, to format it.
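
Sketched with the invented names from above, the split might look like this:

async function presentUserPosts(postRepository, userId, limit, offset) {
  // The use case returns raw business data...
  const posts = await new GetUserPosts(postRepository).execute(userId, limit, offset);

  // ...and the presenter decides the format; the use case never
  // knows whether JSON or CSV was requested.
  return {
    json: JSON.stringify(posts),
    csv: posts.map(p => [p.id, p.title].join(',')).join('\n'),
  };
}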

Firebase Realtime Database document ordering

I am listening for new Firebase Realtime Database documents with code something like this:
firebase.database().ref(path)
  .orderByChild('timestamp')
  .on('child_added', snap => {
    ...
  });
where timestamp is set on the server with firebase.database.ServerValue.TIMESTAMP. I would like to have documents always handled in timestamp order, but I am aware that documents I add locally may arrive in the above code out of order.
I can check for and fix mis-ordered arrivals but I'd prefer not to if there is some way to have this not happen. I know about this answer (and answers that link to it) but I believe that applies to an earlier API without ordering methods like orderByChild.
I believe that I should be able to get timestamp order if I always add documents using a transaction and pass false in the applyLocally argument. I am wondering if it also works to add documents from a separate JavaScript context on the same client (e.g. from a Web Worker) without a transaction.
Will either or both of these approaches guarantee timestamp ordering? Is there any other way to achieve this? Among approaches that work, is one clearly superior or are there trade-offs among them?
The local estimate/latency compensation event is only fired on the client that performs the write operation. So if you perform a write operation in a different context, the original context will only see the operation when it comes from the server.
You might even be able to accomplish this by using two FirebaseApp instances, although I couldn't get that working in a quick test here myself.
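
For reference, here is a sketch of the transaction approach from the question, assuming the namespaced (v8-style) Web SDK, where applyLocally is the third argument to transaction() (whether ServerValue.TIMESTAMP behaves well inside a transaction is worth verifying for your SDK version):

const newRef = firebase.database().ref(path).push();
newRef.transaction(
  () => ({
    text: 'hello',
    timestamp: firebase.database.ServerValue.TIMESTAMP,
  }),
  (error, committed) => {
    if (error) console.error('write failed', error);
  },
  /* applyLocally = */ false // suppress the local latency-compensation event
);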

Can Firebase transform data server-side before writing it?

According to this documentation, and this accompanying example, Firebase follows this flow when transforming newly written data:
1. Client writes data to Firebase, which is immediately accepted
2. The supplied Cloud Function is triggered, which transforms the data (in the example above, it removes swear words)
3. The transformed data is written again, overwriting the original data written in step 1
Maybe I'm missing something here, but this flow seems to present some problems. For example, if there is an error in step 2 above, and step 3 is never fired, the un-transformed data will just linger in the database. It seems like it would be better to transform the data as soon as it hits the server, but before writing. This would be followed by a single write operation, which will leave no loose artifacts behind if it fails. Is there any way in the current Firebase + Google Cloud Functions stack to add these types of pre-write data transforms?
My (tentative and weird) solution so far is to have a "shadow" /_temp/{endpoint} area in my Firebase db, so that when I want to write to /{endpoint}, I write there instead, which then triggers the relevant cloud function to do the transformation before writing to /{endpoint}. This at least prevents potentially incomplete data from leaking into my database, but it seems very inelegant and "hacky."
I'd also be interested to know if there are any server-side methods for transforming data before responding to read requests.
There is no hook in the Firebase Database (neither through Cloud Functions nor elsewhere) that allows you to modify values before they're written to the database. The temporary queue is the idiomatic way to address this use case. It functions much like a moderator queue in most forum software.
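
A sketch of that queue pattern with a Cloud Functions database trigger (the paths and the transform function are illustrative):

const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();

// Clients write to /_temp/posts/{pushId}; only transformed data
// ever reaches the real /posts location.
exports.moderatePost = functions.database
  .ref('/_temp/posts/{pushId}')
  .onCreate(async (snapshot, context) => {
    const transformed = transform(snapshot.val()); // hypothetical transform
    await admin.database().ref('/posts/' + context.params.pushId).set(transformed);
    return snapshot.ref.remove(); // clean up the queue entry
  });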
You could use an HTTP Function to create an endpoint that your code calls and then perform the transformation there. You could use a similar pattern for reading data, although you'd have to rebuild the realtime synchronization capabilities of Firebase yourself.
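
Continuing the sketch above, the write path through an HTTP Function might look roughly like this:

exports.createPost = functions.https.onRequest(async (req, res) => {
  // Transform before the single write, so nothing untransformed
  // ever lands in the database.
  const transformed = transform(req.body); // hypothetical transform
  const ref = admin.database().ref('/posts').push();
  await ref.set(transformed);
  res.json({ id: ref.key });
});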

Node.js, MongoDB, and Concurrency

I'm working on a game prototype and am worried about the following case: the browser makes an AJAX call to Node.js, which has to do several MongoDB operations using async.series.
What prevents multiple requests at the same time from causing database issues? New events (i.e. db operations) seem like they could be run out of order or in between the async.series steps.
In other words, what happens if a user makes AJAX calls very quickly, before the prior ones have finished their async.series? Hopefully that makes sense.
If this is indeed an issue, what is the proper way to handle it?
First and foremost, #fmodos's comment should be completely disregarded. It is wrong on many levels, but most simply: you could have any number of nodes running (say, on Heroku), and there is no guarantee that subsequent requests will hit the same node.
Now, I'm going to answer your question by asking more questions. (You really didn't give me a choice here)
What are these operations doing? Inserting documents? Updating existing documents? Removing documents? This is very important, because if all you're doing is simply inserting documents, then why does it matter if one finishes before the other? If you're updating documents then you should NOT be issuing a find, grabbing a ref to the object, and then calling save. (I'm making the assumption you're using Mongoose; if you're not, I would.) Instead, what you should be doing is using built-in Mongo operators like $inc, which properly handle concurrent requests.
http://docs.mongodb.org/manual/reference/operator/update/inc/
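
For example, with Mongoose (the model and field names here are invented), the atomic version is a single server-side operation:

// Safe under concurrent requests: $inc is applied atomically by MongoDB.
async function addPoint(playerId) {
  await Player.updateOne({ _id: playerId }, { $inc: { score: 1 } });
}

// The racy version to avoid: find, mutate in memory, save. Two requests
// interleaving there can overwrite each other's increment.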
Does that help at all? If not, please let me know and I will give it another shot.
Mongo has database-wide read/write locks. It gives preference to writes on the same collection first, then fulfills reads. So if, by chance, Bill is writing to the db while Joe is reading at the same time, Bill's write will execute first while Joe waits, and then Joe is given all the data (including Bill's write).

How do you folks handle complex state situations where order of operations is important?

I'm getting into a situation where I have several interacting widgets (on a web UI), all of which can be in multiple different states, and whose behavior depends on the others. I'm running into situations where, for example, a set of data gets sorted twice, or the data gets displayed before it's sorted rather than the other way around. It's a bit of a whack-a-mole problem, where I think I've simplified things and gotten them working, only to find out I've broken something somewhere else.
I have functions that do things like:
widgetAFunction
    load data into widget B
    tell widget B to sort the data
    tell widget B to display the data
My love of code reuse makes me want to do something like write a loadData function in widget B that goes something like this:
widgetBLoadDataFunction
    update data
    sort the data
    refresh the view
So that all widgetA has to do is call one function on widgetB. But then there are cases where I just want to sort the data, without updating the data, so I write:
widgetBSortFunction
    sort the data
    refresh the view
And then maybe I want a filter function
widgetBFilterFunction
    filter the data
    refresh the view
And maybe I want to update the data but not sort it, so I have
widgetBNoSortLoadDataFunction
    update data
    refresh the view
It doesn't seem that complex, but I wind up with these really long, very brittle chains of function calls, or a bunch of very similar calls. As Martin Fowler would say, the code is getting a little smelly.
So, what other alternatives do I have? On a recent project I did a state-machine kind of thing, where I registered a bunch of functions with a set of conditions, or states, that would trigger their execution. That worked fairly well, and I'm thinking that approach might be good to use again.
Does anyone know what I'm talking about here, and even better, can anyone point me toward some patterns that will help me get my head around this better?
What you need is a finite state machine implementation. Basically every finite state machine needs:
Events that the program responds to
States where the program waits between events
Transitions between states in response to events
Actions taken during transitions
Variables that hold values needed by actions between events
A good article from IBM teaches you a way of implementing one in JavaScript.
Edit: Here is an FSM builder, so you don't have to build your own.
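
To make those pieces concrete, here is a minimal hand-rolled sketch (state and event names invented for illustration):

function refreshView() { /* re-render the widget; stubbed for the sketch */ }

const widgetFsm = {
  state: 'empty',
  // For each state: which events are legal, and which state they lead to.
  transitions: {
    empty:  { load: 'loaded' },
    loaded: { load: 'loaded', sort: 'sorted', filter: 'loaded' },
    sorted: { load: 'loaded', filter: 'sorted' },
  },
  dispatch(event, action) {
    const next = this.transitions[this.state][event];
    if (!next) return; // event not valid in this state: ignored, not a bug
    action();          // the actual work, e.g. sort the data
    this.state = next;
    refreshView();     // one refresh point, so it can't be forgotten or doubled
  },
};

// Usage: widgetFsm.dispatch('load', () => { /* update the data */ });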
Fernando already mentioned FSMs, and gave good info and links. :)
In addition, I'll add that your classes should already incorporate enough state that you're not worried about sorting twice, etc. That is, widgetB.sort() should check whether the data has been sorted since the last update and just return if so. There's practically no downside to doing this, and it can improve performance (and also guard consistency).
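
Sketched as a guard flag (illustrative):

class WidgetB {
  constructor() {
    this.data = [];
    this.sortedSinceUpdate = false;
  }

  update(data) {
    this.data = data;
    this.sortedSinceUpdate = false; // new data invalidates the previous sort
    this.refresh();
  }

  sort() {
    if (this.sortedSinceUpdate) return; // already sorted: a second call is a no-op
    this.data.sort();
    this.sortedSinceUpdate = true;
    this.refresh();
  }

  refresh() { /* re-render the view */ }
}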
