I am using this query to update a status value:
public function updateStatus(Request $request)
{
$customer = Customer::findOrFail($request->user_id);
$customer->status = $request->status;
$customer->new_customer_status = 1;
$customer->save();
return response()->json(['message' => 'User status updated successfully.']);
}
I want that, if status == 1, then after one week $customer->new_customer_status should automatically become NULL.
How can I schedule a query based on time or days, so that it runs automatically after one week or at a given time?
Make a command with php artisan make:command UpdateUserNotNew.
Inside the generated class app/Console/Commands/UpdateUserNotNew.php, set the command name:
protected $signature = 'cron:update-user-not-new';
Then, in the handle() method:
public function handle()
{
    // Clear the "new customer" flag for customers created at least 7 days ago
    Customer::whereDate('created_at', '<=', now()->subDays(7))
        ->update([
            'new_customer_status' => null
        ]);
}
Inside app/Console/Kernel.php, add this line in the schedule() method:
protected function schedule(Schedule $schedule)
{
$schedule->command('cron:update-user-not-new')->daily(); //<--- this line
}
For all of that to work, you must have a cron entry enabled on your server; that is covered in the Laravel docs: https://laravel.com/docs/7.x/scheduling#introduction
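For reference, the cron entry from those docs looks like this (the project path is a placeholder):

* * * * * cd /path-to-your-project && php artisan schedule:run >> /dev/null 2>&1

It invokes the Laravel scheduler every minute, and the scheduler decides which commands are due.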
You have to put your query inside a queued job. Then, you can schedule this job to run after one week.
This is the example given in the documentation, right here: https://laravel.com/docs/7.x/queues#delayed-dispatching
ProcessPodcast::dispatch($podcast)->delay(now()->addMinutes(10));
Where ProcessPodcast is the job class. I'll let you dive further into the documentation to see how to create a job, but it's really easy and straightforward. Of course, in your case, you should probably use now()->addWeek() instead of now()->addMinutes(10).
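For illustration, a minimal sketch in the controller from the question, assuming a hypothetical job class MarkCustomerAsNotNew (created with php artisan make:job MarkCustomerAsNotNew) whose handle() method sets the flag back to NULL:

// In updateStatus(), after $customer->save():
if ((int) $request->status === 1) {
    // Delay the (hypothetical) job by one week; requires a queue driver to be configured
    MarkCustomerAsNotNew::dispatch($customer)->delay(now()->addWeek());
}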
I work in Laravel and am trying to get the response from the page. This is the controller in question:
public function counter($id)
{
$toSearch = $id * 10000;
$latest = User::where('counter', '>=', $toSearch)->get()->last();
$res = $latest['counter'] % $toSearch;
return $res;
}
As you can see, the result returned by the controller is a single record, and I am trying hard to get that record into a JavaScript file that is separate from my Blade view. I don't want to access the database from JS; I'm just trying to get that single record.
This is the function that is responsible for returning the result:
function counter(id) {
let data;
// Please Help Me Here
return data;
}
The returned value from this function will be used by another function.
The way the algorithm works is to call the counter function from the JS file, with the result produced by the controller and retrieved somehow, perhaps using fetch or anything similar.
Note: the result must be an integer; perhaps something extra is needed to convert it to an integer / number.
I see two options for solving your problem:
1. Send the variable to the blade template and then retrieve it with js.
2. Make an ajax request to retrieve the data; you need to create a separate route that only returns the result via JSON.
Both variants have their pros and cons; for some the second is easier, for others the first. I'll try to write both down below:
1. Sending data to Blade - for this one you need to know where that counter will be used. If you are trying to use it globally, then you might attach it to the body or to a footer/header, like this:
<body data-counter="{{App\Http\Models\User::calculateCounter($user_id)}}">
This is just an example; your real code might be different. For the code above you need to create a public static function in the User model and put your query there (a sketch follows below). You also have to pass the $user_id variable and receive it in the User model. You could instead use a controller method to achieve this, though the result would be the same ($user_id still needs to be passed).
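A sketch of that static helper, reusing the query from your controller (calculateCounter is the name used in the blade attribute above):

// In the User model
public static function calculateCounter($user_id)
{
    $toSearch = $user_id * 10000;
    $latest = self::where('counter', '>=', $toSearch)->get()->last();
    return $latest ? $latest->counter % $toSearch : 0;
}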
Then, in your js file's function:
function counter() {
return parseInt(document.body.dataset.counter);
}
2. Ajax request - you may run into an async issue: a value returned from inside an XMLHttpRequest event listener never reaches the caller, so the function below wraps the request in a Promise. If that is unfamiliar, try to learn more about returning responses from asynchronous XMLHttpRequest; it might be tricky.
In your js function:
function counter(id) {
    return new Promise(function (resolve, reject) {
        const xhttp = new XMLHttpRequest();
        xhttp.open("GET", `/getCounterOfUser/${id}`);
        xhttp.addEventListener('load', function (event) {
            if (event.target.responseText && event.target.status == 200) {
                resolve(parseInt(JSON.parse(event.target.responseText)));
            } else {
                resolve(0);
            }
        });
        xhttp.addEventListener('error', function (event) {
            reject(event); // Define what happens in case of error
        });
        xhttp.send();
    });
}
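Since the function now returns a Promise, the caller has to wait for it, for example:

counter(5).then(function (res) {
    // res is an integer here; hand it to the next function
});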
And you need to create a new route:
Route::get('/getCounterOfUser/{id}', [App\Http\Controllers\Your_Controller::class, 'counter']);
You also need to return JSON from your controller's function:
public function counter($id)
{
$toSearch = $id * 10000;
$latest = User::where('counter', '>=', $toSearch)->get()->last();
$res = $latest->counter % $toSearch;
return response()->json($res);
}
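If you prefer fetch (mentioned in the question) over XMLHttpRequest, a minimal sketch of the same call against the route above:

async function counter(id) {
    const response = await fetch(`/getCounterOfUser/${id}`);
    if (!response.ok) return 0;
    return parseInt(await response.json(), 10);
}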
I have a ReactJS application that gets data from a remote API and shows it. The data returned contains an Activity ID, which corresponds to an Activity name. The database for Activities is not easily accessible for me though. The same case happens with Account and Location for example.
My approach was to create a JS file for each of Account, Activity, and Location, and add one method in each that takes the ID, with a big switch inside that matches the ID against its list of IDs and returns the required name.
This approach worked fine for Account and Location, which had 800 and 170 cases respectively. When it came to Activity, which has 11000 cases, npm run build now takes ages to run (I terminated it after more than 15 minutes).
My question is: does the time taken by npm run build correspond to the file size or the syntax of the code inside? Will this approach cause problems if I let npm run build take its time? Or is there a better and faster way to do this, like map for example?
Edit: This is an example for the data:
Account ID: 113300512
Account Name: 113300512:account1
Sample:
switch(id) {
case "170501010001":
return "170501010001: Text in arabic"
case "170501010002":
return "170501010002: Text in arabic"
}
This could be one solution.
Create a class with two methods: store - to store, and find - to find.
I would not recommend using a switch case.
AccountRepository.js
'use strict';
class AccountRepository {
constructor() {
this.accountReadings = new Map();
}
store(id, value) {
this.accountReadings.set(id, value);
}
find(id) {
if (this.accountReadings.has(id)) {
return this.accountReadings.get(id);
} else {
return [];
}
}
}
module.exports = new AccountRepository();
And to use it:
import AccountRepository from './AccountRepository';
AccountRepository.store(...);
AccountRepository.find(...);
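For the 11,000-case Activity list specifically, it may help to keep the ID-to-name pairs as data rather than code; bundlers generally handle a large JSON object much faster than a huge switch statement. A sketch, where activities.json is a hypothetical file holding the pairs:

// activities.json (hypothetical): { "170501010001": "170501010001: Text in arabic", ... }
import activities from './activities.json';

const activityNames = new Map(Object.entries(activities));

export function activityName(id) {
    // Fall back to an empty string for unknown IDs
    return activityNames.get(id) || '';
}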
I need to fetch a sub-set of documents in a Firestore collection that were modified after some moment. I tried going these ways:
It seems that native filtering can work only with real fields in the stored document - i.e., although the Firestore API internally has DocumentSnapshot.getUpdateTime(), I cannot use this information in my query.
I tried adding my own _lastModifiedAt 'service field' via a server-side Firestore cloud function, but updating _lastModifiedAt causes recursive invocation of the onWrite() function, i.e. it also does not work as needed (the recursion finally stops with Error: quota exceeded (Function invocations : per 100 seconds)).
Are there other ideas for how to filter a collection by 'lastModifiedTime'?
Here is my 'cloud function' for reference.
It would work if I could identify who is modifying the document, i.e. ignore the function's own updates of the _lastModified field, but I see no way to check for this.
_lastModifiedBy is set to null because of Firestore's current inability to provide auth information (see here).
exports.updateLastModifiedBy = functions.firestore.document('/{collId}/{documentId}').onWrite(event => {
    console.log(event.data.data());
    var lastModified = {
        _lastModifiedBy: null,
        // Millisecond timestamp of this write ('now' must be defined; Date.now() is one option)
        _lastModifiedAt: Date.now()
    };
    return event.data.ref.set(lastModified, {merge: true});
});
I've found the way to prevent recursion while updating '_lastModifiedAt'.
Note: this will not work reliably if client can also update '_lastModifiedAt'. It does not matter much in my environment, but in general case I think writing to '_lastModifiedAt' should be allowed only to service accounts.
exports.updateLastModifiedBy = functions.firestore.document('/{collId}/{documentId}').onWrite(event => {
var doc = event.data.data();
var prevDoc = event.data.previous.data();
if( doc && prevDoc && (doc._lastModifiedAt != prevDoc._lastModifiedAt) )
// this is my own change
return 0;
var lastModified = getLastModified(event);
return event.data.ref.set(lastModified, {merge: true});
});
Update: Warning - updating lastModified in the onWrite() event causes infinite recursion when you try to delete all documents in the Firebase console. This happens because onWrite() is also triggered for deletes, and writing lastModified into a deleted document actually resurrects it. The resurrected document shows up in the console again and is deleted once more, indefinitely (until the web page is closed).
To fix that issue, the above-mentioned code has to be registered individually for onCreate() and onUpdate(), as sketched below.
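A sketch of that split, reusing getLastModified() from the snippet above and the same event API as the rest of this answer:

exports.setLastModifiedOnCreate = functions.firestore.document('/{collId}/{documentId}')
    .onCreate(event => event.data.ref.set(getLastModified(event), {merge: true}));

exports.setLastModifiedOnUpdate = functions.firestore.document('/{collId}/{documentId}')
    .onUpdate(event => {
        var doc = event.data.data();
        var prevDoc = event.data.previous.data();
        // Same guard as above: skip the function's own write
        if (doc && prevDoc && (doc._lastModifiedAt != prevDoc._lastModifiedAt))
            return 0;
        return event.data.ref.set(getLastModified(event), {merge: true});
    });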
How about letting the client write the timestamp with FieldValue.serverTimestamp() and then validating in security rules that the value written equals the request time?
Also see Mike's answer here for an example: Firestore Security Rules: If timestamp (FieldValue.serverTimestamp) equals now
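A sketch of such a rule (Firestore security rules; _lastModifiedAt is the field name from this question): the write is only allowed when the stored value resolves to the request time, which is exactly what FieldValue.serverTimestamp() produces.

match /{collId}/{documentId} {
  // Only allow writes whose _lastModifiedAt is the server timestamp of this request
  allow write: if request.resource.data._lastModifiedAt == request.time;
}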
You could try the following function, which will not update _lastModifiedAt if the document has been marked as modified within the last 5 seconds. This should ensure that the function only runs once per update (as long as you don't update more than once every 5 seconds).
exports.updateLastModifiedBy = functions.firestore.document('/{collId}/{documentId}').onWrite(event => {
    console.log(event.data.data());
    // Bail out if _lastModifiedAt was already set within the last 5 seconds
    if ((Date.now() - 5000) < event.data.data()._lastModifiedAt) { return null; }
    var lastModified = {
        _lastModifiedBy: null,
        _lastModifiedAt: Date.now() // millisecond timestamp of this write
    };
    return event.data.ref.set(lastModified, {merge: true});
});
I'm starting with IndexedDB and, to not reinvent the wheel, I'm using Dexie.js https://github.com/dfahlander/Dexie.js
I created the database, added data, and now I'm writing a generic function that takes a CSV and populates other tables in the database.
So, more or less, my code is:
// Creation and populate database and first table
var db = new Dexie("database");
db.version(1).stores({table1: '++id, name'});
db.table1.add({name: 'hello'});
Up to here all is OK.
Now, in the success handler of an ajax request:
db.close();
db.version(2).stores({table2: '++id, name'});
db.open();
db.table2.add({name: 'hello'});
The first time this code runs everything is OK, but the next time I get this error:
VersionError The operation failed because the stored database is a
higher version than the version requested.
If I delete the database and run the code again, only the first time works OK.
Any idea? I don't much like the IndexedDB versioning approach; it looks frustrating and I haven't found much help on the net.
Thanks.
Edit:
I discovered the problem/bug/procedure(?). If I don't add anything before a version modification I don't have this issue, but does somebody know whether this is the normal procedure?
So... if this is the procedure, I can't add a table dynamically with a generic method: first all declarations, and then add values. Is there any possibility to add a table after adding values?
Edit again... I just realized that I could create another database. I'll post results. But any information about this issue is welcome :)
Edit again... I dynamically created another database and everybody is happy!!
That is because the second time the code runs, your database is on version 2, but your main code still tries to open it at version 1.
If you don't know the currently installed version, try opening Dexie in dynamic mode. This is done by not specifying any version:
var db = new Dexie('database');
db.open().then(function (db) {
console.log("Database is at version: " + db.verno);
db.tables.forEach(function (table) {
console.log("Found a table with name: " + table.name);
});
});
And to dynamically add a new table:
function addTable (tableName, tableSchema) {
var currentVersion = db.verno;
db.close();
var newSchema = {};
newSchema[tableName] = tableSchema;
// Now use statically opening to add table:
var upgraderDB = new Dexie('database');
upgraderDB.version(currentVersion + 1).stores(newSchema);
return upgraderDB.open().then(function() {
upgraderDB.close();
return db.open(); // Open the dynamic Dexie again.
});
}
The latter function returns a promise; wait for it to resolve before using the new table, as in the usage sketch below.
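For example, a minimal usage sketch (the table name and schema are placeholders):

addTable('table2', '++id, name').then(function () {
    return db.table('table2').add({name: 'hello'});
}).catch(function (err) {
    console.error('Failed to add table:', err);
});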
If your app is open in several browser windows, the other windows will get their db connection closed as well, so they can never trust the db instance to be open at any given time. You might want to listen for db.on('versionchange') (https://github.com/dfahlander/Dexie.js/wiki/Dexie.on.versionchange) to override the default behavior for that:
db.on("versionchange", function() {
    db.close(); // Allow other page to upgrade schema.
    db.open()   // Reopen the db again.
        .then(()=> {
            // New table can be accessed from now on.
        }).catch(err => {
            // Failed to open. Log or show!
        });
    return false; // Tell Dexie's default implementation not to run.
});
I'm having a problem with the settings of Breeze's JsonMediaTypeFormatter.
What I want is for the dates in the JSON sent and received by the Web API to always be in UTC.
According to this document, it should be possible by setting the DateTimeZoneHandling property to DateTimeZoneHandling.Utc on the JsonSerializerSettings.
However, that did not work.
Investigating this source code, I realized that what might be influencing this behavior was the hack that was done for this other issue.
By removing all of the code below, everything works OK.
//jsonSerializerSettings.Converters.Add(new IsoDateTimeConverter
//{
// DateTimeFormat = "yyyy-MM-dd\\THH:mm:ss.fffK"
//});
How can I handle this situation without having to remove the Hack?
EDIT 1
My first attempt to set was as follows:
var jsonFormatter = Breeze.WebApi.JsonFormatter.Create();
jsonFormatter.SerializerSettings.DateTimeZoneHandling = DateTimeZoneHandling.Utc;
jsonFormatter.SupportedMediaTypes.Add(new MediaTypeHeaderValue("application/json"));
jsonFormatter.SupportedEncodings.Add(new UTF8Encoding(false, true));
GlobalConfiguration.Configuration.Formatters.Insert(
0, jsonFormatter);
But this did not work, the returned date was not in UTC.
EDIT 2
First, I've updated the Breeze lib to 0.80.3 version.
In my App_Start folder I have this BreezeWebApiConfig.cs file:
[assembly: WebActivator.PreApplicationStartMethod(
typeof(Partner.App_Start.BreezeWebApiConfig), "RegisterBreezePreStart")]
namespace Partner.App_Start
{
public static class BreezeWebApiConfig
{
public static void RegisterBreezePreStart()
{
GlobalConfiguration.Configuration.Routes.MapHttpRoute(
name: "BreezeApi",
routeTemplate: "api/{controller}/{action}"
);
var jsonFormatter = Breeze.WebApi.JsonFormatter.Create();
jsonFormatter.SupportedMediaTypes.Add(new MediaTypeHeaderValue("application/json"));
jsonFormatter.SupportedEncodings.Add(new UTF8Encoding(false, true));
GlobalConfiguration.Configuration.Formatters.Insert(
0, jsonFormatter);
// Apply query parameters, expressed as OData URI query strings,
// to results of Web API controller methods that return IQueryable<T>
GlobalConfiguration.Configuration.Filters.Add(
new Breeze.WebApi.ODataActionFilter());
}
}
}
Second, I've created a CustomBreezeConfig.cs class (with the code described below by Jay) in a folder that I called BreezeConfig, but this new attempt did not work.
Regards,
Bernardo Pacheco
As of breeze v 0.80.3, we've added the capability to customize the json serializer settings that breeze uses for both queries and saves. It involves adding a server side class that is a subclass of the new Breeze.WebApi.BreezeConfig class. This subclass will look something like:
public class CustomBreezeConfig : Breeze.WebApi.BreezeConfig {
/// <summary>
/// Overridden to create a specialized JsonSerializer implementation that uses UTC date time zone handling.
/// </summary>
protected override JsonSerializerSettings CreateJsonSerializerSettings() {
var baseSettings = base.CreateJsonSerializerSettings();
baseSettings.DateTimeZoneHandling = DateTimeZoneHandling.Utc;
return baseSettings;
}
}
Any instance of a subclass of Breeze.WebApi.BreezeConfig that appears in the server side project will now be automatically discovered and used to customize breeze's configuration.
Please let us know if this helps ( or doesn't ).
Please try breeze v 0.80.5 along with the corresponding release notes. Hopefully, times should now round-trip properly.
When you say adding DateTimeZoneHandling didn't work, how did you try setting it?
You might try just adding this line immediately above the 'Converters.Add' call (from above) in the source (without removing the 'hack'), and let me know if it works.
jsonSerializerSettings.DateTimeZoneHandling = DateTimeZoneHandling.Utc;
I agree that it's still clumsy because it means that you have to modify the breeze source. So if it does work, we will try to come up with some way to allow you to set this from outside the formatter. Please let us know.
I solved the utc problem with this hack, which still smells.
In app.vm.run.js:
app.vm.run = (function ($, ko, dataservice, router) {
var currentRunId = ko.observable(),
// run will be an entity
run = ko.observable(),
...
save = function () {
this.run().lastUpdate(makeDatetimeUTC(moment().toDate()));
this.run().runStart(makeDatetimeUTC(this.run().runStart()));
this.run().runEnd(makeDatetimeUTC(this.run().runEnd()));
dataservice.saveChanges();
// the test r === run() succeeds because the local run is a
// ko.observable which is bound to the run in the cache
var r = dataservice.getRunById(currentRunId());
},
...
})($, ko, app.dataservice, app.router);
In myScripts.js:
// Here is a real pain in the neck.
// For some reason, when the entity is saved, it shows up on the server as UTC datetime
// instead of local. Moment parses everything as local by default, so the formatDate function
// used to get a display value needs to be converted to utc before it is returned to the server.
//
// This function takes the value of the dependentObservable in the entity
// and converts it to a string which can be stored back into the entity before sending
// it to the server.
//
// The reason I need to do that is so that it displays properly after the save.
// The date seems to be handled properly by the server.
var makeDatetimeUTC = function(localDatetime) {
var datestring = formatDate(localDatetime);
var utc = moment.utc(datestring);
return formatDate(utc);
};
var formatDate = function(dateToFormat) {
if (dateToFormat === null || dateToFormat === undefined || dateToFormat.length === 0)
return "";
// intermediate variable is not needed, but is good for debugging
var formattedDate = moment(dateToFormat).format('MM/DD/YYYY hh:mm A');
return formattedDate;
},
formatObservableDate = function(observable) {
if (ko.isObservable(observable))
return observable(formatDate(observable()));
else
throw new Error("formatObservableDate expected a ko.Observable ");
};