I need to develop a natural language querying tool for a structured database. I have tried two approaches:
using Python NLTK (the Natural Language Toolkit for Python)
using JavaScript and JSON (as the data source)
In the first approach I applied a few NLP steps to normalize the natural-language query: removing stop words, stemming, and finally mapping keywords using a feature-based grammar. This methodology works for simple scenarios.
Then I moved to the second approach: finding the data in JSON, getting the corresponding column and table names, and then building a SQL query. For this one I also implemented stop-word removal and stemming in JavaScript.
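Roughly, the second approach looks like this (a simplified sketch; the schema, stop-word list, and sample question below are invented for illustration, not taken from my actual code):

// Map query keywords to known column/table names and assemble a SQL string.
const schema = {
  city_table: ["city", "country", "population"]   // invented schema
};

const STOP_WORDS = new Set(["what", "are", "the", "in", "of", "is"]);

function buildSql(question) {
  const keywords = question
    .toLowerCase()
    .split(/\W+/)
    .filter(w => w && !STOP_WORDS.has(w));

  for (const [table, columns] of Object.entries(schema)) {
    const matched = columns.filter(c => keywords.includes(c));
    if (matched.length > 0) {
      return `SELECT ${matched.join(", ")} FROM ${table}`;
    }
  }
  return null;
}

console.log(buildSql("What is the population of the city?"));
// -> "SELECT city, population FROM city_table"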
Both of these techniques have limitations. I want to implement a semantic search approach.
Can anyone suggest a better approach for this?
Semantic parsing for NLIDB (natural language interfaces to databases) is a mature domain with many techniques: rule-based methods (involving grammars) and machine learning approaches. They cover a wide range of query inputs and give far better results than pure NL processing or regex methods.
The technique I favor is based on feature-based context-free grammars (FCFG). For starters, in the NLTK book available online, look for the string "sql0.fcfg". The code example there shows how to map the NL phrase-structure query "What cities are located in China" into the SQL query SELECT City FROM city_table WHERE Country="china" via the "SEM" (semantics) feature of the FCFG.
I recommend Covington's books:
NLP for Prolog Programmers (1994)
Prolog Programming in Depth (1997)
They will help you go a long way. The PDFs are downloadable from his site.
As I commented, I think you should add some code, since not everyone has read the book.
Anyway, my conclusion is that yes, as you said, it has a lot of limitations, and the only way to handle more complex queries is to write very extensive and complete grammar productions, which is hard work.
I have multiple datasets that I took from Kaggle. There are multiple CSV files, and each CSV file is made specifically for one activity: sitting, standing, walking, running, etc. The data is taken from sensors like accelerometers and gyroscopes, and the values in the datasets are per axis: x, y, and z.
Sample Data
Here is a sample dataset of jogging. Now I need to make classifiers in my program so that it can detect by itself whether the data is of jogging, sitting, standing, etc. I want to mix all the datasets into a single CSV file, upload it to my webpage, and then have the JavaScript code detect whether a particular row is sitting, standing, jogging, etc. I don't want any code help; I just need a little explanation or a way to start coding it. How can I start making such a classifier? I know it is kind of a broad question, but I think I have tried to explain myself in the best way possible. Once my program has labelled every row with a specific activity, it will count all the activities separately and then show them in a table on the webpage.
In order to answer your question properly, it would be very helpful to know your level of understanding and experience with machine learning.
If you are a beginner, I would suggest trying to run and understand a couple of the tutorials that can easily be found on the web.
If you need an idea of what the "standard" approach to machine learning development looks like, I will try to give you a general idea of the process.
You can summarize the process in these main steps:
Data pre-processing -> Data splitting -> Feature selection -> Model training -> Validation -> Deployment
Data pre-processing is meant to clean and format the data: removing NA values, decisions about categorical variables, outlier analysis, and so on. This is a complex step that depends on the application. In your case I would start by checking that the data in the different datasets are homogeneous, i.e. the features have the same meaning across CSV files and corresponding features follow the same distribution. While the meaning of each feature should be explained in the description of your CSVs, the check of the distributions can easily be done by plotting box-plots for each feature and CSV. If the distributions of the same feature across different CSV files don't overlap, you should investigate the issue further.
An important step in the design of a good model is the splitting of the data. You should split your data into a training/validation set (training/validation/test for a more comprehensive approach). This step allows you to train your model on the training set and test it on the validation set, giving an unbiased estimate of your model's performance. I suggest becoming familiar with concepts such as cross-validation, stratified cross-validation, nested cross-validation for hyper-parameter tuning, overfitting, and bias. The validation of the model will give you an idea of the performance to expect on unseen data. If you are considering more than one model, you can use the validation results to choose the "best" one. Here I suggest a comparison using confidence intervals or, if possible, a significance test (e.g. t-test, ANOVA). Before deployment, the model is trained on all the available data.
The choice of the model depends on the data that you are using: number of samples, number of features, type of variables (numerical, categorical), and so on.
I'm not an expert in JavaScript, but I believe (just a feeling) that Python and R are more common choices for developing machine learning applications. Both have libraries developed specifically for the task, and you can find a lot of material and tutorials around.
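That said, if you do want to stay in JavaScript in the browser, a tiny nearest-neighbour classifier can be written by hand. This is only a sketch under assumed column names (x, y, z, activity); a real solution would extract features over time windows rather than classify single sensor rows:

// Minimal 1-nearest-neighbour classifier over labelled sensor rows.
// Assumes each row looks like { x: 0.12, y: -9.7, z: 0.4, activity: "jogging" }.
function euclidean(a, b) {
  const dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
  return Math.sqrt(dx * dx + dy * dy + dz * dz);
}

function classify(row, trainingRows) {
  let best = null, bestDist = Infinity;
  for (const sample of trainingRows) {
    const d = euclidean(row, sample);
    if (d < bestDist) { bestDist = d; best = sample.activity; }
  }
  return best;
}

// Count predicted activities over a whole data set, e.g. to fill your results table.
function countActivities(rows, trainingRows) {
  const counts = {};
  for (const row of rows) {
    const label = classify(row, trainingRows);
    counts[label] = (counts[label] || 0) + 1;
  }
  return counts;
}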
With a bit more context I could be more specific.
I hope this helps.
I'm in the process of designing my first serious RESTful API, which will sit above a WCF service.
There are resources like outlet, schedule, and job. A schedule is always owned by an outlet, and a schedule will contain 0 or more jobs. A job does not have to be on a schedule.
I keep coming back to thinking that resources should be addressable in the same way resources are addressed on a file system. This would mean I'd have URIs like:
/outlets
/outlets/4/schedules
/outlets/4/schedules/1000/jobs
/outlets/4/schedules/1000/jobs/5123
Things start to get messy, though, when considering how to pull resources back in different situations.
e.g. I want a job not on a schedule:
/outlets/4/jobs/85 (this now means we've got 2 ways to pull a job back that's on a schedule)
e.g. I want all schedules regardless of outlet:
/schedules or /outlets/ALL/schedules
There are also lots of other more complex requirements but I'm sure you get the gist.
File systems have a good, logical way of addressing resources. You can obviously create symbolic links and achieve something approximating what I describe, but it'd be messy. It'll be even messier once things get even slightly more complex, such as adding the ability to get schedules by date:
/outlets/4/2016-08-29/schedules
And without using query string parameters I'm not even sure how I'd request back all jobs that are NOT on a schedule. The following feels wrong because unscheduled is not a resource:
/outlets/4/unscheduled/jobs
So, I'm coming to think that file system type addressing is only going to work for the simplest of services (our underlying system has hundreds of entity types, with some very complex relationships and a huge number of operations).
Having multiple ways of doing the same thing tends to lead to confusion and messy documentation and I want to avoid it. As a result I'm almost forced to opt for going with the lowest common denominator and choosing very simple address forms - like the 3rd one below:
/outlets/4/schedules/1000/jobs/5123
/outlets/4/jobs/5123
/jobs/5123
From these very simple address forms I would then need to expand using query string parameters to do anything more complex, e.g:
/jobs?scheduleId=1000
/jobs?outletId=4
/jobs?outletId=4&fromDate=2016-01-01&toDate=2016-01-31
This feels like it's going against the REST model though and query string parameters like this aren't predictable, so far from the "no docs needed" idea.
OK, so at the minute I'm almost on the side of the fence that is saying in order to get a clean, maintainable API I'm going to have to go with very simple resource addresses, use query string parameters extensively and have good documentation.
Anyway, this doesn't feel like the conclusion I should have arrived at. Where have I gone wrong?
Welcome to the world of REST API programming. These are the hard problems which we all face when trying to apply general principles to specific situations. There is no clear and easy answer to your questions, but here are a few additional tips that may be useful.
First, you're right that the file-system approach to addressing breaks down when you have complex relationships. You'll only want to establish that sort of addressing when there's a true hierarchy there.
For example, if all jobs were part of a single schedule, then it would make sense to look to schedules/{id}/jobs/{id} to get to a given job. If you think of it from a data-storage perspective, you could imagine there being an XML file for each schedule, and the jobs would just be elements within that file.
However, it sounds like in this particular case your data is more relational. From a data-storage perspective, you'd represent each job as a row in a database table, and establish some foreign key relationships to tie some jobs to some schedules. Your addressing scheme should reflect this by making /jobs a top-level endpoint, and using optional query string parameters to filter by schedule or outlet when it makes sense to do so.
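To make that concrete, here is a hypothetical sketch in Express, just to illustrate the addressing scheme (the route shapes and the stub data layer are assumptions; your actual implementation sits over the WCF service, not Node):

// Hypothetical Express sketch: /jobs as a top-level resource with optional filters.
const express = require("express"); // assumes: npm install express
const app = express();

// Stub data; in reality this would come from the WCF service.
const JOBS = [
  { id: 5123, outletId: 4, scheduleId: 1000, date: "2016-01-15" },
  { id: 85,   outletId: 4, scheduleId: null, date: "2016-02-02" }
];

// Relationships become optional filters on the collection.
app.get("/jobs", (req, res) => {
  const { outletId, scheduleId, fromDate, toDate } = req.query;
  let jobs = JOBS;
  if (outletId)   jobs = jobs.filter(j => String(j.outletId) === outletId);
  if (scheduleId) jobs = jobs.filter(j => String(j.scheduleId) === scheduleId);
  if (fromDate)   jobs = jobs.filter(j => j.date >= fromDate);
  if (toDate)     jobs = jobs.filter(j => j.date <= toDate);
  res.json(jobs);
});

// A single job is always addressable the same way, regardless of its relationships.
app.get("/jobs/:id", (req, res) => {
  const job = JOBS.find(j => j.id === Number(req.params.id));
  job ? res.json(job) : res.sendStatus(404);
});

app.listen(3000);

A convention such as a reserved filter value (for example scheduleId=none) could then cover the "jobs not on a schedule" case without inventing a fake resource segment.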
So you're on the right track. One more thing you might want to consider is OData, which extends the basic REST principles with a standards-oriented way of representing things like filtering with query string parameters. The address syntax feels a little "out there", but it does a pretty good job of handling the situations where straight REST starts falling apart. And because it's more standardized, there are tools available to help with things like translating from a data layer into an OData endpoint, or generating client-side proxy helpers based on the metadata exposed by that endpoint.
This feels like it's going against the REST model though and query string parameters like this aren't predictable, so far from the "no docs needed" idea.
If you use OData, then its spec combines with the metadata produced by your tooling to become your documentation. For example, your metadata says that a job has a date property which represents a date. Then the OData spec provides a way to represent filter queries for a date value. From this information, consumers can reliably produce a filter query that will "just work" because you're using a framework server-side to do the hard parts. And if they don't feel like memorizing how OData URLs work, they can generate a client proxy in the language of their choice so they can generate the appropriate URL via their favorite syntax.
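For illustration, a consumer-side request against a hypothetical OData endpoint might look like this ($filter, $orderby and $top are standard OData system query options; the endpoint and field names are invented):

// Hypothetical OData-style query built client-side (field names are made up).
const base = "https://api.example.com/odata/Jobs";
const filter = "date ge 2016-01-01 and date le 2016-01-31 and outletId eq 4";
const url = `${base}?$filter=${encodeURIComponent(filter)}&$orderby=date&$top=50`;

fetch(url)
  .then(res => res.json())
  .then(page => console.log(page.value)); // OData JSON responses wrap collections in a "value" array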
I'm trying to show a list of lunch venues around the office with their menus for today. But the problem is that the websites offering the lunch menus don't always offer the same kind of content.
For instance, some of the websites offer a nice JSON output. Look at this one: it offers the English/Finnish course names separately, and everything I need is available. There are a couple of others like this.
But others don't always have such nice output, like this one. The content is laid out in plain HTML, the English and Finnish food names are not in a consistent order, and food properties (L, VL, VS, G, etc.) are just normal text, like the food name.
What, in your opinion, is the best way to scrape all this data in its different formats and turn it into usable data? I tried to make a scraper with Node.js (and PhantomJS, etc.), but it only works with one website, and it's not very accurate with the food names.
Thanks in advance.
You could use something like kimonolabs.com; it is much easier to use and gives you APIs to keep your data updated.
Remember that it works best for tabular data.
There may be simple algorithmic solutions to the problem. If there is a list of all available food names, that can be really helpful: you simply look for occurrences of each food name inside the document (for today).
If there is no such food list, you could use TF-IDF. TF-IDF lets you calculate the score of a word inside a document relative to the current document and the other documents. But this solution needs enough data to work.
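A tiny sketch of the TF-IDF idea in plain JavaScript (documents are assumed to be pre-tokenized arrays of lowercase words):

// Minimal TF-IDF sketch.
function tf(term, doc) {
  return doc.filter(w => w === term).length / doc.length;
}

function idf(term, docs) {
  const df = docs.filter(doc => doc.includes(term)).length;
  return df === 0 ? 0 : Math.log(docs.length / df);
}

function tfidf(term, doc, docs) {
  return tf(term, doc) * idf(term, docs);
}

// Words that score high in today's page but low elsewhere are candidate dish names.
const docs = [
  ["broccoli", "soup", "l", "g", "bread"],
  ["chicken", "curry", "rice", "bread"]
];
console.log(tfidf("broccoli", docs[0], docs)); // ~0.139
console.log(tfidf("bread", docs[0], docs));    // 0 – appears everywhere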
I think the best solution is something like this (see the sketch after the list):
Creating a list of all websites that should be scraped.
Writing a driver class for each website.
Each driver has the duty of creating the general domain entity from its site-specific document format.
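A hypothetical sketch of that driver pattern (class and field names are invented; it assumes Node 18+ or a browser for the global fetch):

// One driver per site, all producing the same normalized shape
// ({ venue, date, courses: [{ name, properties }] }).
class MenuDriver {
  async fetchRaw() { throw new Error("implement in subclass"); }
  parse(raw) { throw new Error("implement in subclass"); }
  async getMenu() { return this.parse(await this.fetchRaw()); }
}

class JsonApiDriver extends MenuDriver {
  constructor(url) { super(); this.url = url; }
  async fetchRaw() { return (await fetch(this.url)).json(); }
  parse(json) {
    // Field names here are assumptions; adapt to the real API response.
    return {
      venue: json.restaurantName,
      date: json.date,
      courses: json.courses.map(c => ({ name: c.title_en, properties: c.properties }))
    };
  }
}

// Usage: every driver yields the same structure, so the front end stays simple.
const drivers = [new JsonApiDriver("https://example.com/menu.json")];
Promise.all(drivers.map(d => d.getMenu())).then(menus => console.log(menus));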
If you can use PHP, Simple HTML DOM Parser along with Guzzle would be a great choice. These two will give you a jQuery-like selector and a nice wrapper around HTTP.
You are touching on a really difficult problem. Unfortunately, there are no easy solutions.
Actually there are two different parts to solve:
data scraping from different sources
data integration
Let's start with the first problem: data scraping from different sources. In my projects I usually process data in several steps. I have dedicated scrapers for all the specific sites I want, and process them in the following order:
fetch raw page (unstructured data)
extract data from page (unstructured data)
extract, convert and map data into page-specific model (fully structured data)
map data from fully structured model to common/normalized model
Steps 1-2 are scraping-oriented, while steps 3-4 are strictly data-extraction / data-integration oriented.
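A minimal sketch of those four steps for a single HTML site (the selectors and field names are invented; cheerio is just one option for step 2, and the global fetch assumes Node 18+):

const cheerio = require("cheerio"); // assumes: npm install cheerio

// 1. fetch raw page (unstructured data)
async function fetchRaw(url) {
  const res = await fetch(url);
  return res.text();
}

// 2. extract data from page (still unstructured)
function extract(html) {
  const $ = cheerio.load(html);
  return $(".menu-item").map((_, el) => $(el).text().trim()).get();
}

// 3. map into a page-specific model (fully structured for this site)
function toPageModel(lines) {
  return lines.map(line => ({ rawLine: line }));
}

// 4. map into the common/normalized model shared by all sites
function toCommonModel(pageModel, venue) {
  return pageModel.map(item => ({ venue, course: item.rawLine }));
}

// Usage for one site:
// fetchRaw("https://example.com/lunch")
//   .then(html => console.log(toCommonModel(toPageModel(extract(html)), "Example Bistro")));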
While you can implement steps 1-2 relatively easily using your own web scrapers or by utilizing existing web services, data integration is the most difficult part in your case. You will probably require some machine-learning techniques (shallow, domain-specific Natural Language Processing) along with custom heuristics.
In the case of messy input like this one, I would process the lines separately and use a dictionary to strip out the known Finnish/English words and analyse what is left. But it will never be 100% accurate, due to the possibility of human-input errors.
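A rough sketch of that line-by-line dictionary filtering (the word lists are illustrative, not complete):

// Strip dietary codes and known stop words per line; keep the rest as a candidate dish name.
const DIET_CODES = new Set(["L", "VL", "VS", "G", "M"]);
const STOP_WORDS = new Set(["ja", "and", "with"]);

function candidateDishName(line) {
  return line
    .split(/\s+/)
    .filter(w => !DIET_CODES.has(w.replace(/[(),]/g, "")))
    .filter(w => !STOP_WORDS.has(w.toLowerCase()))
    .join(" ")
    .trim();
}

console.log(candidateDishName("Broccoli soup (L, G)")); // -> "Broccoli soup"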
I am also worried that your stack is not very well suited to such tasks. For this kind of processing I use Java/Groovy along with integration frameworks (Mule ESB / Spring Integration) in order to coordinate the data processing.
In summary: it is a really difficult and complex problem. I would rather accept less input-data coverage than aim for 100% accuracy (unless it is really worth it).
I am developing a local web application with jQuery/JavaScript.
My goal is to create a search engine for searching content in a JSON file. I already made it with regex, but it works slowly.
What is the best way? Is there a JavaScript search engine?
Try lunr.js, which supports full-text search in JavaScript.
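A minimal lunr.js (v2.x) sketch; the field names are assumptions about your JSON:

const lunr = require("lunr"); // assumes: npm install lunr (or a <script> tag in the browser)

const documents = [
  { id: 1, title: "First post", body: "JavaScript full-text search" },
  { id: 2, title: "Second post", body: "Searching JSON content quickly" }
];

// Build the index once, then query it as often as you like.
const idx = lunr(function () {
  this.ref("id");
  this.field("title");
  this.field("body");
  documents.forEach(doc => this.add(doc));
});

console.log(idx.search("search")); // matches with refs and scores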
The term "search engine" normally means that a large set of data is indexed (a resource intensive task). Searching the data set after indexing is then quick. If the data set is very large, it is more likely that indexing and search will be performed on the server (and only the search results are then returned to the browser).
If you just need to search fields in a JSON file that is small or medium in size, then consider JavaScript "search algorithms" rather than search engines.
Fuse.js is a lightweight fuzzy-search library in JavaScript, with zero dependencies. It has more GitHub stars than Lunr and has ongoing (and recent) commits as of August 2022.
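A minimal Fuse.js sketch; the key name is an assumption about your JSON records:

const Fuse = require("fuse.js"); // assumes: npm install fuse.js

const items = [
  { title: "Old Man's War" },
  { title: "The Lock Artist" }
];

// "keys" tells Fuse which fields to search; threshold controls fuzziness.
const fuse = new Fuse(items, { keys: ["title"], threshold: 0.4 });
console.log(fuse.search("lock")); // fuzzy matches, best first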
I don't think such a thing exists; you have to develop it yourself.
Put your application content into a String and develop a search function with indexOf (to find the target string) and substring (to extract the surrounding fragment).
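For example, a naive snippet search along those lines:

// Find every occurrence of a term and return a short snippet around each match.
function findSnippets(text, term, context = 30) {
  const snippets = [];
  let pos = text.indexOf(term);
  while (pos !== -1) {
    snippets.push(text.substring(Math.max(0, pos - context), pos + term.length + context));
    pos = text.indexOf(term, pos + term.length);
  }
  return snippets;
}

console.log(findSnippets("plain indexOf search works for small JSON files", "indexOf"));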
Have a look at fullproof, it's a basic search engine for use in the browser: http://reyesr.github.com/fullproof/
There may be others though.
I think you have two options: Lunr, which was mentioned earlier, and search-index.
Elasticlunr is a lightweight full-text search engine in JavaScript for browser search and offline search. Elasticlunr.js is developed on top of Lunr.js, but is more flexible than Lunr.js.
Elasticlunr.js provides query-time boosting and field search. It is a bit like Solr, but much smaller and not as bright; however, it also provides flexible configuration and query-time boosting.
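A minimal elasticlunr sketch with query-time boosting (the field names are assumptions about your data):

const elasticlunr = require("elasticlunr"); // assumes: npm install elasticlunr

const index = elasticlunr(function () {
  this.setRef("id");
  this.addField("title");
  this.addField("body");
});

index.addDoc({ id: 1, title: "Oracle released its latest database", body: "..." });
index.addDoc({ id: 2, title: "Lunr for the browser", body: "Full-text search in JavaScript" });

// Query-time boosting: title matches count twice as much as body matches.
console.log(index.search("oracle", { fields: { title: { boost: 2 }, body: { boost: 1 } } }));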
During my routine work, I happened to write a chained JavaScript function which is something like a LINQ expression for querying a JSON result.
var Result = from(obj1).as("x").where("x.id=5").groupby("x.status").having(count("x.status") > 5).select("x.status");
It works perfectly and gives the expected result.
I was wondering whether it would look even better if the code could be written like this (in a more readable way):
var Result = from obj1 as x where x.status
groupby x.status having count(x.status) > 5
select x.status;
Is there a way to achieve this?
Cheers
Ramesh Vel
No. JavaScript doesn't support this.
But this looks quite good too:
var Result = from(obj1)
.as("x")
.where("x.id=5")
.groupby("x.status")
.having(count("x.status") > 5)
.select("x.status");
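For what it's worth, here is a hypothetical sketch of how such a fluent wrapper could be implemented for simple cases; predicates are plain functions rather than strings, and the as() aliasing step is omitted:

// Hypothetical fluent query wrapper over a plain array.
function from(data) {
  let rows = data.slice();
  const api = {
    where(pred)  { rows = rows.filter(pred); return api; },
    groupby(key) {
      const groups = {};
      rows.forEach(r => (groups[r[key]] = groups[r[key]] || []).push(r));
      rows = Object.entries(groups).map(([k, items]) => ({ key: k, items }));
      return api;
    },
    having(pred) { rows = rows.filter(pred); return api; },
    select(map)  { return rows.map(map); }
  };
  return api;
}

// Usage: statuses that occur more than 5 times.
const data = Array.from({ length: 12 }, (_, i) => ({ id: i, status: i % 2 ? "open" : "closed" }));
const result = from(data)
  .groupby("status")
  .having(g => g.items.length > 5)
  .select(g => g.key);
console.log(result); // ["closed", "open"]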
Most people insist on trying to metaprogram from inside their favorite language. That doesn't work if the language doesn't support metaprogramming well; other answers have observed that JavaScript does not.
A way around this is to do metaprogramming from outside the language, using program transformation tools. Such tools can parse source code, carry out arbitrary transformations on it (that's what metaprogramming does anyway), and then emit the revised program.
If you have a general-purpose program transformation system that can parse arbitrary languages, you can then do metaprogramming on/with whatever language you like. See our DMS Software Reengineering Toolkit for such a tool; it has robust front ends for C, C++, Java, C#, COBOL, PHP, ECMAScript, and a number of other programming languages, and has been used for metaprogramming on all of these.
In your case, you want to extend the JavaScript grammar with new syntax for SQL queries, and then transform them to plain JavaScript. (This is a lot like Intentional Programming)
DMS will easily let you build a JavaScript dialect with additional rules, and then you can use its program transformation capabilities to produce the equivalent standard Javascript.
Having said that, I'm not a great fan of "custom syntax for every programmer on the planet", which is where Intentional Programming leads, IMHO.
It is a good thing to do if there is a large community of users that would find this valuable. This idea may or may not be one of those; part of the problem is that you don't get to find out without doing the experiment, and it might fail to gain enough social traction to matter.
Although not quite what you wanted, it is possible to write parsers in JavaScript, and then just parse the query (stored as a string) and execute it. For example, using libraries like http://jscc.jmksf.com/ (no doubt there are others out there), it shouldn't be too hard to implement.
But what you have in the question looks great already; I'm not sure why you'd want it to look the way you suggested.
Considering that this question was asked some years ago, I will try to add more to it based on current technologies.
As of ECMAScript 6, metaprogramming is now supported in a sense via Symbol, Reflect and Proxy objects.
By searching on the web, I found a series of very interesting articles on the subject, written by Keith Kirkel:
Metaprogramming in ES6: Symbols and why they're awesome
In short, Symbols are new primitives that can be added inside an object (without practically being properties) and are very handy for passing metaprogramming properties to it, among other things. Symbols are all about changing the behavior of existing classes by modifying them (reflection within implementation).
Metaprogramming in ES6: Part 2 - Reflect
In short, Reflect is effectively a collection of all of those “internal methods” that were available exclusively through the JavaScript engine internals, now exposed in one single, handy object. Its usage is analogous to the Reflection capabilities of Java and C#. They are used to discover very low level information about your code (Reflection through introspection).
Metaprogramming in ES6: Part 3 - Proxies
In short, Proxies are handler objects, responsible for wrapping objects and intercepting their behaviors through traps (Reflection through intercession).
Of course, these objects provide specific metaprogramming capabilities, much more restrictive compared to metaprogramming languages, but still can provide handy ways of basic metaprogramming, mainly through Reflection practices, in fact.
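Tiny illustrations of the three primitives (nothing here is specific to the articles above, just standard ES6):

// Symbol: a unique, non-colliding property key.
const hidden = Symbol("hidden");
const obj = { visible: 1, [hidden]: 2 };
console.log(Object.keys(obj));              // ["visible"] – the symbol key is not listed

// Reflect: the engine's internal operations exposed as ordinary functions.
console.log(Reflect.has(obj, "visible"));   // true
console.log(Reflect.ownKeys(obj));          // ["visible", Symbol(hidden)]

// Proxy: intercept property access with a "get" trap.
const logged = new Proxy(obj, {
  get(target, prop, receiver) {
    console.log(`read ${String(prop)}`);
    return Reflect.get(target, prop, receiver);
  }
});
logged.visible; // logs "read visible"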
Finally, it is worth mentioning that there is some noteworthy ongoing research on staged metaprogramming in JavaScript.
Well, in your code sample:
var Result = from(obj1)
.as("x")
.where("x.id=5")
.groupby("x.status")
.having(count("x.status") > 5)
.select("x.status");
The only problem I see (other than select used as an identifier) is that you embed a predicate as a function argument. You'd have to make it a function instead:
.having(function(x){ return x.status > 5; })
JavaScript has closures and dynamic typing, so you can do some really nifty and elegant things in it. Just letting you know.
In pure JS, no, you cannot. But with the right preprocessor it is possible.
You can do something similar with sweet.js macros or (God forgive me) GPP.
What you want is to change the JavaScript parser into an SQL parser. It wasn't created to do that; the JavaScript syntax doesn't allow it.
What you have is 90% like SQL (it maps straight onto it) and 100% valid JavaScript, which is a great achievement. My answer to the question in the title is: YES, metaprogramming is possible, but NO, it won't give you an SQL parser, since it is bound to the JavaScript grammar.
Maybe you want something like JSONPath if you've got JSON data. I found this at http://www.json.org/. Lots of other tools linked to from there if it's not exactly what you need.
(this is being worked on as well: http://docs.dojocampus.org/dojox/json/query)
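For example, with one JSONPath implementation (the "jsonpath" npm package; the data shape below is invented):

const jp = require("jsonpath"); // assumes: npm install jsonpath

const data = {
  rows: [
    { id: 5, status: "open" },
    { id: 7, status: "closed" }
  ]
};

// Roughly what .where("x.id=5") expressed, as a JSONPath filter expression.
console.log(jp.query(data, "$.rows[?(@.id==5)]")); // [{ id: 5, status: "open" }]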