In week seven, we started working with a new abstract data type: trees. It was a notable experience, not just because of how useful they are, but because I worked with them in my "intro to syntactic patterns" course last semester. Oh, the nostalgia.
Anyways, if my understanding of trees is correct, they are simply a way of storing data hierarchically. There are relatively few parameters to worry about: it really just boils down to how "high" or "low" in the tree one piece of data sits relative to another. There are nodes, the elements of the tree that store the data, and there are branches, the lines that connect them and determine their relative positions. One node must be the root, which, contrary to its name, is actually at the very top of the tree. In other words, there does not exist a node in the tree such that the root node would be a child or descendant of it (more on the definition of "child" later). Likewise, there are also leaves, which occur at the very bottom of the tree. The hierarchy of the tree is preserved by one general rule: for any given node (except the root), there is exactly one node higher up the tree, connected to it by a branch. This superordinate node is referred to as the "parent", while the node itself is the "child". A node can have as many children as you desire, but each node can have only one parent. In the end, this translates to a triangular shape in which one node branches downwards to a few more, then each of those branches down to a few more each, until you reach the bottom.
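The rules above can be sketched in a few lines of Python. This is a minimal illustration, not a library implementation; the names `TreeNode`, `add_child`, and so on are my own inventions for this sketch.

```python
class TreeNode:
    """One element of a general tree: holds data, one parent, any number of children."""

    def __init__(self, data):
        self.data = data
        self.parent = None   # every node except the root has exactly one parent
        self.children = []   # a node can have as many children as you desire

    def add_child(self, child):
        # Attaching a child records this node as its single parent,
        # which is exactly the "one parent per node" rule from above.
        child.parent = self
        self.children.append(child)
        return child

    def is_root(self):
        # The root is the one node with no parent: the very top of the tree.
        return self.parent is None

    def is_leaf(self):
        # Leaves are the nodes at the very bottom, with no children.
        return len(self.children) == 0
```

So a small tree gets built by starting at the root and branching downwards: `root = TreeNode("top")`, then `root.add_child(TreeNode("child"))`, and so on.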
An example of a tree data type, pulled right from linguistics (and inspired by my first post), would be a syntactic tree. Syntactic trees are a way to model how our minds organize words, and the concepts behind them, into what we would deem "grammatical" sentences. It's the reason I can't spit just word soup out expect and you follow to it. Long story short, words fall into broad categories, and they are organized in a particular order with respect to those categories. Here is an image of a syntactic tree:
As you can see, different types of words appear in different positions in the tree, based on certain relevant features. For instance, the phrase "is[cop] a mathematics machine" sits to the right of the node "the computer". It appears that, whenever a property is predicated of a subject, the phrase containing that property must occur to the subject's right. Their positions relative to each other under the sentence node "S" signify which phrase is the subject and which is the predicate (at least for simple active sentences).
I hope I have demonstrated a strong intuitive grasp of the idea behind tree structures and their many applications. I'll write more about them later, but for now, I need sleep.
Syntactic tree borrowed from the article "Using nltk_lite's chunk parser to detect prosodic phrase boundaries in the Aix-MARSEC corpus of spoken English". Although the article is focused on natural language processing rather than linguistic theory per se, it still illustrates what a syntactic tree looks like.