JavaScript essentially has only the type Object (with some optimizations for value types, plus the "absence" values). Can I, under any circumstances, be academically correct in describing a user-defined object as a type?
e.g.
function MyType() {}
MyType.prototype.myMethod = function() {}
Can MyType be thought of as a type - albeit one not recognized by a compiler?
Put another way - is the textbook definition of a type something that is recognized by a compiler, or can a type simply be thought of as something that has a certain interface?
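For concreteness, a minimal sketch (using the MyType constructor above) of how such a user-defined "type" behaves at run time:

var obj = new MyType();
obj.myMethod();                      // the "interface" in question
console.log(obj instanceof MyType);  // true - a runtime-checkable membership test
console.log(typeof obj);             // "object" - the engine itself reports only Object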
Seems like the body of your question is asking a different question than the title. In an academic context, e.g. a data structures textbook, "abstract data type" refers to things like Linked Lists, Stacks, Binary Trees, etc.
These are not necessarily implemented in an Object Oriented language (in my Data Structures university course, we used C) -- they could consist of a group of structs and functions. So to answer the question in your title, no, they're not the same.
I come from a .NET background, and Microsoft seems to like having methods and properties that begin with uppercase letters. e.g. PrintMessage();
JavaScript seems to follow Java's rules for naming in that function names should begin with lowercase letters. e.g. printMessage();
JavaScript (ECMAScript version 5) now supports accessor properties, similar to properties in .NET. Again .NET likes to have properties beginning with uppercase letters. e.g. Message
So my question is, which naming convention should JavaScript use for properties?
e.g. Message or message?
I've already looked through some style guides, but I can't see much on the subject of properties.
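For reference, here is a small sketch of an ES5 accessor property using the lower-camel-case style (the greeter object and message name are just illustrative):

var greeter = (function () {
  var msg = '';                      // private state via closure
  var self = {};
  Object.defineProperty(self, 'message', {
    get: function () { return msg; },
    set: function (value) { msg = String(value); }
  });
  return self;
}());
greeter.message = 'hello';
console.log(greeter.message);        // "hello"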
Douglas Crockford's take on the subject.
Names
Names should be formed from the 26 upper and lower case letters (A .. Z, a .. z), the 10 digits (0 .. 9), and _ (underbar). Avoid use of international characters because they may not read well or be understood everywhere. Do not use $ (dollar sign) or \ (backslash) in names.
Do not use _ (underbar) as the first character of a name. It is sometimes used to indicate privacy, but it does not actually provide privacy. If privacy is important, use the forms that provide private members. Avoid conventions that demonstrate a lack of competence.
Most variables and functions should start with a lower case letter.
Constructor functions which must be used with the new prefix should start with a capital letter. JavaScript issues neither a compile-time warning nor a run-time warning if a required new is omitted. Bad things can happen if new is not used, so the capitalization convention is the only defense we have.
Global variables should be in all caps. (JavaScript does not have macros or constants, so there isn't much point in using all caps to signify features that JavaScript doesn't have.)
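Putting those rules together, a minimal sketch (all names here are my own illustrations, not Crockford's):

var MAX_RETRIES = 3;                 // global: all caps
function Point(x, y) {               // constructor requiring new: capitalized
  this.x = x;
  this.y = y;
}
Point.prototype.distanceTo = function (other) {  // method: lower camel case
  var dx = this.x - other.x;
  var dy = this.y - other.y;
  return Math.sqrt(dx * dx + dy * dy);
};
var origin = new Point(0, 0);        // ordinary variable: lower case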
W3C's take on the subject
Call things by their name — easy, short and readable variable and function names
This is a no-brainer but it is scary how often you will come across variables like x1, fe2 or xbqne in JavaScript, or — on the other end of the spectrum — variable names like incrementorForMainLoopWhichSpansFromTenToTwenty or createNewMemberIfAgeOverTwentyOneAndMoonIsFull.
None of these make much sense — good variable and function names should be easy to understand and tell you what is going on — not more and not less. One trap to avoid is marrying values and functionality in names. A function called isLegalDrinkingAge() makes more sense than isOverEighteen() as the legal drinking age varies from country to country, and there are other things than drinking to consider that are limited by age.
Hungarian notation is a good variable naming scheme to adopt (there are other naming schemes to consider), the advantage being that you know what something is supposed to be and not just what it is.
For example, if you have a variable called familyName and it is supposed to be a string, you would write it as sFamilyName in “Hungarian”. An object called member would be oMember and a Boolean called isLegal would be bIsLegal. It is very informative for some, but seems like extra overhead to others — it is really up to you whether you use it or not.
Keeping to English is a good idea, too. Programming languages are in English, so why not keep this as a logical step for the rest of your code. Having spent some time debugging Korean and Slovenian code, I can assure you it is not much fun for a non-native speaker.
See your code as a narrative. If you can read line by line and understand what is going on, well done. If you need to use a sketchpad to keep up with the flow of logic, then your code needs some work. Try reading Dostojewski if you want a comparison to the real world — I got lost on a page with 14 Russian names, 4 of which were pseudonyms. Don't write code like that — it might make it more art than product, but this is rarely a good thing.
Wikipedia's take on the subject
JavaScript
The built-in JavaScript libraries use the same naming conventions as Java. Classes use upper camel case (RegExp, TypeError, XMLHttpRequest, DOMObject) and methods use lower camel case (getElementById, getElementsByTagNameNS, createCDATASection). In order to be consistent most JavaScript developers follow these conventions. See also: Douglas Crockford's conventions
I'm reading a slide deck that states "JavaScript is untyped." This contradicted what I thought to be true so I started digging to try and learn more.
Every answer to Is JavaScript an untyped language? says that JavaScript is not untyped and offers examples of various forms of static, dynamic, strong, and weak typing that I'm familiar with and happy about, so that wasn't the way to go.
So I asked Brendan Eich, the creator of JavaScript, and he said:
academic types use "untyped" to mean "no static types". they are smart enough to see that values have types (duh!). context matters.
Do academically-focused computer science folks use "untyped" as a synonym of "dynamically typed" (and is this valid?) or is there something deeper to this that I am missing? I agree with Brendan that context is important, but any citations or explanations would be great, as my current "go to" books are not playing ball on this topic.
I want to nail this down so I can improve my understanding, and because even Wikipedia doesn't refer to this alternative usage (that I can find, anyway). I don't want to get it wrong, either by misusing the term or by questioning its use in future :-)
(I've also seen a top Smalltalker say Smalltalk is "untyped" too, so it's not a one-off which is what set me off on this quest! :-))
Yes, this is standard practice in academic literature. To understand it, it helps to know that the notion of "type" was invented in the 1930s, in the context of lambda calculus (in fact, even earlier, in the context of set theory). Since then, a whole branch of computational logic has emerged that is known as "type theory". Programming language theory is based on these foundations. And in all these mathematical contexts, "type" has a particular, well-established meaning.
The terminology "dynamic typing" was invented much later -- and it is a contradiction in terms in the face of the common mathematical use of the word "type".
For example, here is the definition of "type system" that Benjamin Pierce uses in his standard textbook Types and Programming Languages:
A type system is a tractable syntactic method for proving the absence of certain program behaviors by classifying phrases according to the kinds of values they compute.
He also remarks:
The word “static” is sometimes added explicitly--we speak of a “statically typed programming language,” for example--to distinguish the sorts of compile-time analyses we are considering here from the dynamic or latent typing found in languages such as Scheme (Sussman and Steele, 1975; Kelsey, Clinger, and Rees, 1998; Dybvig, 1996), where run-time type tags are used to distinguish different kinds of structures in the heap. Terms like “dynamically typed” are arguably misnomers and should probably be replaced by “dynamically checked,” but the usage is standard.
Most people working in the field seem to share this point of view.
Note that this does not mean that "untyped" and "dynamically typed" are synonyms. Rather, that the latter is a (technically misleading) name for a particular case of the former.
PS: And FWIW, I happen to be both an academic researcher in type systems, and a non-academic implementer of JavaScript, so I have to live with the schism. :)
I am an academic computer scientist specializing in programming languages, and yes, the word "untyped" is frequently (mis)-used in this way. It would be nice to reserve the word for use with languages that don't carry dynamic type tags, such as Forth and assembly code, but these languages are rarely used and even more rarely studied, and it's a lot easier to say "untyped" than "dynamically typed".
Bob Harper is fond of saying that languages like Scheme, Javascript, and so on should be considered typed languages with just a single type: value. I lean to this view, as it makes it possible to construct a consistent worldview using just one type formalism.
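A rough illustration of that view in JavaScript itself (my sketch, not Harper's formalism): every value inhabits the single type "value", and typeof merely reads the run-time tag distinguishing the cases:

function describe(v) {
  switch (typeof v) {                // dispatch on the run-time tag
    case 'number':   return 'a number';
    case 'string':   return 'a string';
    case 'function': return 'a function';
    default:         return 'something else';
  }
}
console.log(describe(42));           // "a number"
console.log(describe(describe));     // "a function"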
P.S. In pure lambda calculus, the only "values" are terms in normal form, and the only closed terms in normal form are functions. But most scientists who use the lambda calculus add base types and constants, and then you either include a static type system for lambda or you are right back to dynamic type tags.
P.P.S. To original poster: when it comes to programming languages, and especially type systems, the information on Wikipedia is of poor quality. Don't trust it.
I've looked into it, and found that the answer to your question is simply, and surprisingly, "yes": academic CS types, or at least some of them, do use "untyped" to mean "dynamically typed". For example, Programming Languages: Principles and Practices, Third Edition (by Kenneth C. Louden and Kenneth A. Lambert, published 2012) says this:
Languages without static type systems are usually called untyped languages (or dynamically typed languages). Such languages include Scheme and other dialects of Lisp, Smalltalk, and most scripting languages such as Perl, Python, and Ruby. Note, however, that an untyped language does not necessarily allow programs to corrupt data—this just means that all safety checking is performed at execution time. […]
[link] (note: bolding in original) and goes on to use "untyped" in just this way.
I find this surprising (for much the same reasons that afrischke and Adam Mihalcin give), but there you are. :-)
Edited to add: You can find more examples by plugging "untyped languages" into Google Book Search. For example:
[…] This is the primary information-hiding mechanism in many untyped languages. For instance PLT Scheme [4] uses generative structs, […]
— Jacob Matthews and Amal Ahmed, 2008 [link]
[…], we present a binding-time analysis for an untyped functional language […]. […] It has been implemented and is used in a partial evaluator for a side-effect free dialect of Scheme. The analysis is general enough, however, to be valid for non-strict typed functional languages such as Haskell. […]
— Charles Consel, 1990 [link]
By the way, my impression, after looking through these search results, is that if a researcher writes of an "untyped" functional language, (s)he very likely does consider it to be "untyped" in the same sense as the untyped lambda calculus that Adam Mihalcin mentions. At least, several researchers mention Scheme and the lambda calculus in the same breath.
What the search doesn't say, of course, is whether there are researchers who reject this identification, and don't consider these languages to be "untyped". Well, I did find this:
I then realized that there is really no circularity, because dynamically typed languages are not untyped languages — it's just that the types are not usually immediately obvious from the program text.
— someone (I can't tell who), 1998 [link]
but obviously most people who reject this identification wouldn't feel a need to explicitly say so.
Untyped and dynamically typed are absolutely not synonyms. The language that is most often called "untyped" is the Lambda Calculus, which is actually a unityped language - everything is a function, so we can statically prove that everything has the function type. A dynamically typed language has multiple types, but does not provide a way for the compiler to check them statically, forcing the compiler to insert runtime checks on variable types.
JavaScript, then, is a dynamically typed language: it is possible to write programs in JavaScript such that some variable x could be a number, or a function, or a string, or something else (and determining which one in general would require solving the Halting Problem or some equally hard mathematical problem), so you can apply x to an argument and the browser has to check at runtime that x is a function.
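For example (a minimal sketch of the situation described):

var x = 42;                          // x currently holds a number
x = "now a string";                  // ...now a string
x = function (n) { return n + 1; };  // ...now a function
if (typeof x === 'function') {       // the run-time check the engine must make
  console.log(x(1));                 // 2
}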
Both statements are correct, depending on whether you are talking about values or variables. JavaScript variables are untyped, JavaScript values have types, and variables can range over any value type at runtime (i.e. 'dynamically').
In JavaScript and many other languages, values and not variables carry types. All variables can range over all types of values and may be considered "dynamically typed" or "untyped" - from the perspective of type-checking, a variable that has no/unknowable type and a variable that can take any type are logically and practically equivalent. When type theorists talk about languages and types, they are usually talking about this - variables carrying types - because they are interested in writing type checkers and compilers and so on, which operate on program text (i.e. variables) and not a running program in memory (i.e. values).
By contrast in other languages, like C, variables carry types but values do not. In languages like Java, variables and values both carry types. In C++, some values (those with virtual functions) carry types and others do not. In some languages it is even possible for values to change types, although this is usually considered bad design.
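To make the JavaScript half of that concrete (a tiny sketch):

var v = 1;       // typeof v === "number" - the type belongs to the value
v = "one";       // typeof v === "string" - same variable, new value, new type
v = null;        // typeof v === "object" - a well-known historical quirk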
While it is true that most of the CS researchers that write about types essentially consider only languages with syntactically-derivable types as typed languages, there are lots more of us using dynamically/latently typed languages who take umbrage at that usage.
I consider there to be 3 types [SIC] of languages:
Untyped - only the operator determines the interpretation of the value - and it generally works on anything. Examples: Assembler, BCPL
Statically typed - expressions/variables have types associated with them, and that type determines the interpretation/validity of the operator at compile-time. Examples: C, Java, C++, ML, Haskell
Dynamically typed - values have types associated with them, and that type determines the interpretation/validity of the operator at run-time. Examples: LISP, Scheme, Smalltalk, Ruby, Python, Javascript
To my knowledge, all dynamically-typed languages are type-safe - i.e. only valid operators can operate on values. But the same is not true for statically-typed languages. Depending on the power of the type system used, some operators may be checked only at run-time, or not at all. For example, most statically-typed languages do not handle integer overflow properly (adding 2 positive integers can produce a negative integer), and out-of-bound array references are either not checked at all (C, C++) or are checked only at run-time. Further, some type systems are so weak that useful programming requires escape hatches (casts in C and family) to change the compile-time type of expressions.
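To illustrate the dynamically-typed-but-type-safe point in JavaScript (a small sketch):

var n = 42;
try {
  n.toUpperCase();                      // numbers have no such operation
} catch (e) {
  console.log(e instanceof TypeError);  // true - caught at run time, not misinterpreted
}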
All of this leads to absurd claims, such as that C++ is safer than Python because it's statically typed, whereas the truth is that Python is intrinsically safe while you can shoot your leg off with C++.
This question is all about Semantics
If I give you this data: 12 - what is its type? You have no way of knowing for sure. Could be an integer - could be a float - could be a string. In that sense it's very much "untyped" data.
If I give you an imaginary language which lets you use operators like "add", "subtract", and "concatenate" on this data and some other arbitrary piece of data, then the "type" is somewhat irrelevant (to my imaginary language). For example, perhaps add(12, a) yields 109, which is 12 plus the ASCII value of a.
Let's talk C for a second. C pretty much lets you do whatever you want with any arbitrary piece of data. If you're using a function that takes two uints - you could cast and pass anything you want - and the values will simply be interpreted as uints. In that sense C is "untyped" (if you treat it in such a way).
However - and getting to Brendan's point - if I told you that "My age is 12" - then 12 has a type - at least we know it's numeric. With context everything has a type - regardless of the language.
This is why I said at the beginning - your question is one of semantics. What is the meaning of "untyped"? I think Brendan hit the nail on the head when he said "no static types" - because that's all it can possibly mean. Humans naturally classify things into types. We intuitively know that there is something fundamentally different between a car and a monkey - without ever being taught to make those distinctions.
Getting back to my example in the beginning - a language that "doesn't care about types" (per se) may let you "add" an "age" and a "name" without producing a syntax error... but that doesn't mean it's a logically sound operation.
Javascript may let you do all sorts of crazy things without considering them "errors". That doesn't mean what you are doing is logically sound. That's for the developer to work out.
Is a system/language which doesn't enforce type safety at compile/build/interpretation time "untyped" or "dynamically typed"?
Semantics.
EDIT
I wanted to add something here because some people seem to be getting caught up on "yeah, but Javascript does have some 'types'".
In my comment on someone else's answer I said:
In Javascript I could have objects I've built up to be "Monkeys" and objects I've built up to be "Humans" and some functions could be designed to operate on only "Humans", others on only "Monkeys", and yet others on only "Things With Arms". Whether or not the language has ever been told there is such a category of objects as "things with arms" is as irrelevant to assembly ("untyped") as it is to Javascript ("dynamic"). It's all a matter of logical integrity - and the only error would be using something that didn't have arms with that method.
So, if you consider Javascript to have some "notion of types" internally - and hence "dynamic types" - and think this is somehow "distinctly different from an untyped system" - you should see from the above example that any "notion of types" it has internally is really irrelevant.
To perform the same operation with C#, for example, I'd NEED an interface called ICreatureWithArms or something similar. Not so in Javascript - not so in C or ASM.
Clearly, whether or not Javascript has any understanding of "types" at all is irrelevant.
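To make that concrete, a minimal sketch of the Monkeys/Humans example (all names are illustrative):

function Monkey() { this.arms = 2; }
function Human()  { this.arms = 2; }
function wave(creature) {
  // No "ThingWithArms" category is declared anywhere; the only possible
  // failure is one of logical integrity, at run time.
  if (creature.arms > 0) {
    console.log('waving');
  }
}
wave(new Monkey());                  // fine
wave(new Human());                   // fine
wave({});                            // no type error - just nothing happens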
I am not a computer scientist, but I would be rather surprised if "untyped" were really used as a synonym for "dynamically typed" in the CS community (at least in scientific publications), as IMHO those two terms describe different concepts. A dynamically typed language has a notion of types and enforces the type constraints at runtime (you can't, for example, divide an integer by a string in Lisp without getting an error), while an untyped language doesn't have any notion of types at all (e.g. assembler). Even the Wikipedia article about programming languages (http://en.m.wikipedia.org/wiki/Programming_language#Typed_versus_untyped_languages) makes this distinction.
Update: Maybe the confusion comes from the fact that some texts say something to the extent that "variables are not typed" in Javascript (which is true). But that doesn't automatically mean that the language is untyped (which would be false).
Agree with Brendan - context is everything.
My take:
I remember being confused, circa 2004, because there were arguments breaking out about whether Ruby was untyped or dynamically typed. Old school C/C++ people (of which I was one) were thinking about the compiler and saying Ruby was untyped.
Remember, in C, there are no runtime types, there are just addresses and if the code that's executing decides to treat whatever's at that address as something it isn't, whoops. That's definitely untyped and very different from dynamically typed.
In that world, "typing" is all about the compiler. C++ had "strong typing" because the compiler's checks were more stringent. Java and C were more "weakly typed" (there were even arguments about whether Java was strongly or weakly typed). Dynamic languages were, in that continuum, "untyped" because they had no compiler type checking.
Today, for practicing programmers, we're so used to dynamic languages that we naturally take "untyped" to mean no compiler or interpreter type-checking at all, which would be insanely hard to debug. But there was a period when that wasn't obvious, and in the more theoretical world of CS it may not even be meaningful.
In some deep sense, nothing can be untyped (or almost nothing, anyway) because you must have some intent in manipulating a value to write a meaningful algorithm. This is the world of theoretical CS, which isn't dealing with the specifics of how a compiler or interpreter is implemented for a given language. So "untyped" is (probably, I don't know) entirely meaningless in that context.
Is Ruby strongly or weakly typed?
Presumably the same is true for Javascript.
Ruby is "strong typed".
Strong typing means an object's type (not in the OOP sense, but in a general sense) is checked before an operation requiring a certain type is executed on it.
Weak typing means that no checking is done to ensure that the operation can succeed on the object. (For example, when a function accesses a string like an array of floats, if no type checking is done then the operation is allowed.)
Edit:
It's been 6 years since this answer was posted and I think it warrants some extra clarifications:
Over the years, the notion that "type safety is a dial, not an absolute" has gained ground over the binary (yes/no) meaning.
Ruby is "stronger" typed (with an "er") than most typical dynamic languages. The fact that ruby requires explicit statements for conversion IE: Array("foo"), "42".to_i, Float(23), brings the Ruby typing dial closer to the "Strong Typed" end of spectrum than the "weak typed".
So I would say "Ruby is a stronger typed dynamic language than most common dynamic languages"
Wikipedia labels it as "dynamic ('duck') typed".
Regarding Pop's comment about it being "strong-typed" - I'm not sure his explanation actually fits with what goes on under the covers. The MRI doesn't really "check" to see if an operation can be performed on an object; it just sends the object the message, and if that object doesn't accept that message (either by a method declaration or by handling it in #method_missing) it barfs. If the runtime actually checked to make sure operations were possible, #method_missing wouldn't work.
Also, it should be noted that since everything in Ruby is an object (and I do mean everything), I'm not sure that what he said about "not in the OOP sense" is accurate. In Ruby, you're either an object or a message.
While you can get into arguments about the definition of those terms, I'd say:
Ruby is dynamically and strongly typed, while JavaScript is dynamically and weakly typed.
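The "weakly typed" half of that claim is easy to demonstrate in JavaScript (a small sketch):

console.log("42" + 1);               // "421" - the number is coerced to a string
console.log("42" - 1);               // 41 - the string is coerced to a number
console.log(1 == "1");               // true - loose equality coerces first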
IMHO Ruby is strongly but dynamically typed.
I would consider these languages duck typed.
The over-simplified answer is that both Ruby and JavaScript are weakly typed.
However this question is not quite as clear-cut as it may seem - see this wikipedia article for a more in-depth discussion on the difference between strongly and weakly typed languages.
I just stumbled on this old thread but thought I might offer my opinion. (No, I'm not "hijacking" a zombie-thread.)
My colloquial interpretation of the term "strongly typed™" specifically refers to "compile time." (Which is something that many languages today, including Ruby, "simply do not have.")
For instance, a simple assignment statement such as a = b; could be judged by the compiler to be acceptable or not, based on its assessment of the "types" of a and b, and based on provisions for type-conversion (if applicable) provided by the programmers. If the statement was judged unacceptable, a compile-time error would be thrown and no "executable" would ever be produced.
This notion, of course, is not compatible with the fundamental design precepts of languages such as Ruby, PHP, Perl, JavaScript, or a great many other languages that are in extremely wide (and extremely successful) use today. (Mind you, I do not mean this as a "judgment" either for or against them. They are what they are, and they sure do bring home the bacon.)
Because these languages do not have a "compile time," by my(!) colloquialism they cannot be called "strongly typed." They are obliged to make decisions at runtime which, by their design, could not have been made sooner.
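In JavaScript terms (a hypothetical sketch of the a = b example above), the "judgment" can only happen once the code runs:

var a = 5;
var b = "hello";
a = b;                                  // no compiler exists to judge this assignment
try {
  a.toFixed(2);                         // only now is the mistake discovered
} catch (e) {
  console.log(e instanceof TypeError);  // true
}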
(Also please note that I am specifically excluding from consideration the various "lint tools" that have emerged for this-or-that language in an effort to catch more bugs in advance. These are very useful, yes yes, but not the same thing.)
(I am also purposely excluding various excellent(!) tools that generate source-code in the various target-languages in question ... for the same reasons.)
And – I say once again – I am making a classification, not a judgment.