Hello
I want to import a .glb file, but when I request it from JS I get an error telling me the file isn't found, even though the file was copied.
Here are my files:
wwwroot
/-Assets
/---Element
/---Objects3D
/-----hub.glb
Here's the error:
Failed to load resource: the server responded with a status of 404 () :44353/Assets/Objects3D/hub.glb:1
Do you have any idea?
Thank you!
OK, so with a little investigation I was able to solve my issue; this might help you solve your problem too.
On the server side, when calling UseStaticFiles you should add options that allow files with unknown content types to be served.
There are two ways you can solve this. The first is specifying the content type of the file extension you want to serve. In that case, add the following to the Configure method of your Startup.cs:
// Map the .glb extension to its MIME type (model/gltf-binary)
var contentTypeProvider = new FileExtensionContentTypeProvider();
contentTypeProvider.Mappings[".glb"] = "model/gltf-binary";

app.UseStaticFiles(new StaticFileOptions
{
    ContentTypeProvider = contentTypeProvider
});
The second option (more appropriate if you have more than one unknown file type) is to allow any unknown file type via a flag on the same StaticFileOptions in the same Configure method:
app.UseStaticFiles(new StaticFileOptions
{
    // Serve files whose extensions have no registered content type
    ServeUnknownFileTypes = true
});
Also, make sure you are able to browse the subdirectories on your server. If you are not, you should also include this in the ConfigureServices method of your Startup.cs:
services.AddDirectoryBrowser();
As well as the line below in the Configure method of the same file:
app.UseFileServer(enableDirectoryBrowsing: true);
Note: This solution is for a server using .NET Core 3.1. If you are using a different version or .NET Framework, there might be a different fix.
Background
I am trying to add tracing to a front-end app.
I am not able to use @opentelemetry/exporter-jaeger, since I believe it is for Node.js back-end apps only.
So I am trying to use @opentelemetry/exporter-collector.
1. Succeeded in printing to the browser console
First I tried to print the trace data in the browser console, and the code below successfully prints it.
import { CollectorTraceExporter } from '@opentelemetry/exporter-collector';
import { DocumentLoad } from '@opentelemetry/plugin-document-load';
import { SimpleSpanProcessor, ConsoleSpanExporter } from '@opentelemetry/tracing';
import { WebTracerProvider } from '@opentelemetry/web';
const provider = new WebTracerProvider({ plugins: [new DocumentLoad()] });
provider.addSpanProcessor(new SimpleSpanProcessor(new ConsoleSpanExporter()));
provider.register();
2. Failed to forward to Jaeger
Now I want to forward them to Jaeger.
I am running the Jaeger all-in-one image with:
docker run -d --name jaeger \
-e COLLECTOR_ZIPKIN_HTTP_PORT=9411 \
-p 5775:5775/udp \
-p 6831:6831/udp \
-p 6832:6832/udp \
-p 5778:5778 \
-p 16686:16686 \
-p 14268:14268 \
-p 9411:9411 \
jaegertracing/all-in-one:1.18
Based on the Jaeger port documentation, I might be able to use these two ports (if other ports work, that would be great too!):
14250  collector  accepts model.proto over gRPC
9411   collector  Zipkin-compatible endpoint (HTTP, optional)
Then I found some more info about the Zipkin port:
Zipkin Formats (stable)
Jaeger Collector can also accept spans in several Zipkin data format,
namely JSON v1/v2 and Thrift. The Collector needs to be configured to
enable Zipkin HTTP server, e.g. on port 9411 used by Zipkin
collectors. The server enables two endpoints that expect POST
requests:
/api/v1/spans for submitting spans in Zipkin JSON v1 or Zipkin Thrift format.
/api/v2/spans for submitting spans in Zipkin JSON v2.
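For reference, a minimal Zipkin JSON v2 payload posted from the browser would look roughly like the sketch below; the trace/span IDs and timings are placeholders, not values from my app:
// Hypothetical raw Zipkin JSON v2 POST, only to illustrate the format the
// 9411 endpoint expects; the IDs and timestamps here are made up.
const span = {
  traceId: '4e441824ec2b6a44ffdc9bb9a6453df3', // 32 hex characters
  id: '005e8988db53cfec',                      // 16 hex characters
  name: 'documentLoad',
  timestamp: Date.now() * 1000,                // microseconds since epoch
  duration: 120000,                            // microseconds
  localEndpoint: { serviceName: 'my-service' },
};

fetch('http://localhost:9411/api/v2/spans', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify([span]),                // the endpoint expects a JSON array of spans
});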
I updated my code to:
import { CollectorTraceExporter, CollectorProtocolNode } from '@opentelemetry/exporter-collector';
import { DocumentLoad } from '@opentelemetry/plugin-document-load';
import { SimpleSpanProcessor } from '@opentelemetry/tracing';
import { WebTracerProvider } from '@opentelemetry/web';
const provider = new WebTracerProvider({ plugins: [new DocumentLoad()] });
// The config below currently has issue
const exporter = new CollectorTraceExporter({
serviceName: 'my-service',
protocolNode: CollectorProtocolNode.HTTP_JSON,
url: 'http://localhost:9411/api/v1/spans', // Also tried v2
});
provider.addSpanProcessor(new SimpleSpanProcessor(exporter));
provider.register();
However, I get 400 Bad Request for both the v1 and v2 endpoints, with no response body returned:
POST http://localhost:9411/api/v1/spans 400 (Bad Request)
POST http://localhost:9411/api/v2/spans 400 (Bad Request)
Any idea how I can make the request format correct? Thanks!
UPDATE (8/19/2020)
I think Andrew is right that I should use the OpenTelemetry Collector. I also got some help from Valentin Marchaud and Deniz Gurkaynak on Gitter. I am adding the link here for other people who run into the same issue:
https://gitter.im/open-telemetry/opentelemetry-node?at=5f3aa9481226fc21335ce61a
My final working solution posted at https://stackoverflow.com/a/63489195/2000548
The thing is, you should use the OpenTelemetry Collector if you use the OpenTelemetry exporter.
Please see the schema in the attachment.
I also created a gist which will help you set it up:
https://gist.github.com/AndrewGrachov/11a18bc7268e43f1a36960d630a0838f
(just tune the values, export to jaeger-all-in-one instead of a separate collector + Cassandra, etc.)
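On the browser side, the only change is then to point the exporter at the collector's HTTP receiver instead of at Jaeger/Zipkin directly. A minimal sketch, reusing the same API as above; the URL is an assumption and has to match whatever receiver port/path your collector config exposes:
import { CollectorTraceExporter } from '@opentelemetry/exporter-collector';
import { DocumentLoad } from '@opentelemetry/plugin-document-load';
import { SimpleSpanProcessor } from '@opentelemetry/tracing';
import { WebTracerProvider } from '@opentelemetry/web';

const provider = new WebTracerProvider({ plugins: [new DocumentLoad()] });

// Send spans to the OpenTelemetry Collector; the collector then exports them to Jaeger.
// The URL below is a placeholder - use the port/path of your collector's HTTP receiver.
const exporter = new CollectorTraceExporter({
  serviceName: 'my-service',
  url: 'http://localhost:55681/v1/trace',
});

provider.addSpanProcessor(new SimpleSpanProcessor(exporter));
provider.register();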
Node.js Alexa Task Issue
I'm currently coding a Node.js Alexa Task via AWS Lambda, and I have been trying to code a function that receives information from the OpenWeather API and parses it into a variable called weather. The relevant code is as follows:
var request = require('request');
var weather = "";
function isBadWeather(location) {
    var endpoint = "http://api.openweathermap.org/data/2.5/weather?q=" + location + "&APPID=205283d9c9211b776d3580d5de5d6338";
    var body = "";
    request(endpoint, function (error, response, body) {
        if (!error && response.statusCode == 200) {
            body = JSON.parse(body);
            weather = body.weather[0].id;
        }
    });
}

function testWeather() {
    setTimeout(function () {
        if (weather >= 200 && weather < 800)
            weather = true;
        else
            weather = false;
        console.log(weather);
        generateResponse(buildSpeechletResponse(weather, true), {});
    }, 500);
}
I ran this snippet countless times in Cloud9 and other IDEs, and it seems to be working flawlessly. However, when I zip it into a package and upload it to AWS Lambda, I get the following error:
{
  "errorMessage": "Cannot find module '/var/task/index'",
  "errorType": "Error",
  "stackTrace": [
    "Function.Module._load (module.js:276:25)",
    "Module.require (module.js:353:17)",
    "require (internal/module.js:12:17)"
  ]
}
I installed module-js, request, and many other Node modules that should make this code run, but nothing seems to fix this issue. Here is my directory, just in case:
- planyr.zip
  - index.js
  - node_modules
  - package.json
Does anyone know what the issue could be?
Fixed it! My issue was that I tried to zip the file using my Mac's built-in compression function in Finder.
If you're a Mac user like me, you should run the following command in the terminal from the root directory of your project (the folder containing your index.js, node_modules, etc.):
zip -r ../yourfilename.zip *
For Windows:
Compress-Archive -LiteralPath node_modules, index.js -DestinationPath yourfilename.zip
Update to the accepted answer: when this error occurs, it means your zip file is not in the form AWS requires.
If you double-click the zip you will find a folder inside it containing your code files, but Lambda expects your code files to be at the top level of the zip.
To achieve this:
open terminal
cd your-lambda-folder
zip -r index.zip *
Then, upload index.zip to AWS Lambda.
Check that the file name and handler name are the same:
That means the zip file contains a bundle.js file that exports a handler function:
exports.handler = (event, context, callback) => {//...}
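For instance, a minimal sketch (the file name bundle.js and the response shape are just examples, not anything prescribed by Lambda): with the Handler setting bundle.handler, the file must be named bundle.js and the exported function must be named handler.
// bundle.js - minimal sketch; with the Lambda "Handler" setting "bundle.handler",
// the file name (bundle) and the exported function name (handler) must match.
exports.handler = (event, context, callback) => {
    console.log('Received event:', JSON.stringify(event));
    callback(null, { status: 'ok' }); // return whatever response shape your caller expects
};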
In my case it was because I had the handler file in an inner src directory.
I had to change the 'Handler' property within Lambda from:
index.handler
to
src/index.handler
This is probably a permissions issue with files inside your deployment zip.
Try running chmod 777 on your files before packaging them in the zip file.
In my case the archive contained a folder "src" with the index.js file in it, so I had to set the handler to "src/index.handler".
In my case I had to replace
exports.handler = function eventHandler (event, context) {
with
exports.handler = function (event, context, callback) {
I got this error when I was using lambci/lambda:nodejs8.10 on Windows.
I had tried all of the solutions listed above, but none of them helped with my issue (even though the error stack looked the same as in the question).
Here is my simple solution:
Use the --entrypoint flag to run a container and check whether the file is actually mounted into the container. It turned out I had a shared-drive issue with Docker Desktop.
I had switched my Docker daemon the day before, and everything worked fine except for this problem.
Anyway, remounting my drive in Docker Desktop fixed it; you can either use the docker command or just open the Docker Desktop settings to apply it.
In my case this was caused by Node running out of memory. I fixed that by adding --memory-size 1500 to my aws lambda create-function ... command.
I am new to protobuf.
I have installed the google-protobuf npm package.
The following is my .proto file:
syntax = "proto3";

package com.sixdee;

message Student {
    string name = 1;
    int32 id = 2;
}
And this is how I generated the .js file:
protoc --js_out=import_style=commonjs,binary:. testproto.proto
I pasted the resulting testproto_pb.js into my project.
I am not able to build a protobuf message.
I have tried:
var student = new Student();
student.setName("Ankith");
student.setId(24);
I get Uncaught ReferenceError: Student is not defined
I have referred to the link, but nothing seems to work for me.
Any help is deeply appreciated.
I'm not sure what is wrong with your code. It's hard to judge without the full source code.
I used protobufjs from npm as outlined here: http://webapplog.com/json-is-not-cool-anymore/
You need the protobuf library on the front end as well.
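If you stay with google-protobuf and the commonjs output, the missing piece in the snippet above is importing the generated module; Student is not a global. A minimal sketch, assuming testproto_pb.js sits next to the calling script:
// The generated commonjs module exports the message classes,
// so Student comes from the require, not from a global.
var messages = require('./testproto_pb');

var student = new messages.Student();
student.setName("Ankith");
student.setId(24);

// Round-trip through the binary wire format
var bytes = student.serializeBinary();
var decoded = messages.Student.deserializeBinary(bytes);
console.log(decoded.getName(), decoded.getId());
If this runs in the browser rather than in Node, the generated commonjs file also has to go through a bundler (e.g. webpack or browserify) so that the require call can be resolved.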
I downloaded this demo server.
I followed the instructions:
First, git clone this repo, and then run: npm install python-js. Now you are ready to run the server, run: ./run-demo.js and then open your browser to localhost:8080.
Unfortunately I can't run run-demo.js because I get this error:
---------------------------
Windows Script Host
---------------------------
Line: 1
Character: 1
Error: Invalid character
Code: 800A03F6
Source: Microsoft JScript - compilation error
I tried to run it in the Node.js console, but I got only "..." and nothing happened.
This is the code of run-demo.js:
#!/usr/bin/env node
var fs = require('fs')
//var pythonjs = require('../PythonJS/pythonjs/python-js')
var pythonjs = require('python-js')
var pycode = fs.readFileSync( './server.py', {'encoding':'utf8'} )
var jscode = pythonjs.translator.to_javascript( pycode )
eval( pythonjs.runtime.javascript + jscode )
Any ideas? I want to run a local server and use PythonJS.
I don't believe # is a valid character in JavaScript. If the run-demo.js file is being delivered to your browser, it certainly won't know what to make of the shebang (#!) line, which is used by the UNIX kernel to determine which executable should be used to process the file.
If anyone else is looking for a solution, here it is:
node run-demo.js
Simple as... ;)