Tuesday, January 20, 2009

Funny thing is, I can see this actually happening.


4 comments:

Matthew said...

That's one of my favorites, along with the one about computational linguistics.

The Reality Studio said...

This happens all day every day, but not just to literary theorists. Question for Matt: how long do you think you could fool grad students into discussing a new temporal measure theory [which sounds real enough, I guess] that you made up ten minutes ago? My guess... days.

Matthew said...

Days at the very least, if I don't manage to bullshit it long enough to turn it into an actual paper of some sort. Then... who knows.

The thing of it is, though, anybody can do this with almost anything. The key is being able to apply the pattern-analysis techniques to your pattern analysis itself. Find the patterns in one subject, find the patterns in another, then generalize those patterns into one more complex set that applies equally well to both. That lets you create a synergy between the subjects you do know and the subjects you don't.

For example: my job is doing research into theoretical machine learning methods for natural language processing. So far I've adapted and designed a new method for attacking NLP classification/labeling problems using a brand-new unsupervised machine learning framework built on nonparametric Bayesian statistics, introduced a method for identifying errors within those classifications by equilibrating transition matrices based on their Markov reversibility, and am currently constructing a set of methods for minimizing the local and global risk functionals involved in pattern recognition for answer fusion in automatic complex question answering.
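To give a flavor of what I mean by Markov reversibility (just my own toy sketch in Python; the function names and the violation threshold are mine, not the actual system): take the empirical transition matrix between predicted labels, find its stationary distribution, and flag the label pairs that break detailed balance.

    import numpy as np

    def stationary_distribution(P):
        # Left eigenvector of the row-stochastic matrix P for eigenvalue 1,
        # normalized so it sums to one.
        vals, vecs = np.linalg.eig(P.T)
        pi = np.abs(np.real(vecs[:, np.argmax(np.real(vals))]))
        return pi / pi.sum()

    def reversibility_violations(P, tol=0.05):
        # Detailed balance demands pi[i]*P[i,j] == pi[j]*P[j,i].
        # Label pairs that violate it beyond tol are candidate errors.
        pi = stationary_distribution(P)
        n = P.shape[0]
        return [(i, j) for i in range(n) for j in range(i + 1, n)
                if abs(pi[i] * P[i, j] - pi[j] * P[j, i]) > tol]

The point being: a reversible chain is an equilibrated one, so wherever the chain refuses to equilibrate is where you go looking for mistakes.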

Most people would assume that, having done all that, I must have some expertise in either ML or NLP. My prior training in machine learning? Zilch. Natural language processing expertise? Nada. I do, however, know the physics and math I was taught as an undergrad. My proposed solution to the error-correction problem is almost a direct adaptation of a method designed to predict spin flips in the 1D Ising model. The risk functional came entirely out of the principle of least action from Lagrangian mechanics. To go the other way, one of my next free-time research projects will be to adapt the unsupervised machine learning to predicting temporal state transitions, to determine the local distances between time states in my and Dr. Houston's finite cyclic group quantum clock model, and thus the constant distance between numerically successive states.
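If you're curious what the spin-flip machinery roughly looks like, it's Glauber dynamics (again, a toy sketch under my own assumptions, not the error-correction code itself): the flip probability is built to satisfy detailed balance, the same reversibility condition I'm imposing on the label transition matrices.

    import numpy as np

    def glauber_step(spins, beta, J=1.0, rng=np.random):
        # One Glauber update on a 1D Ising chain with periodic boundaries.
        # The rule p = 1/(1 + exp(beta*dE)) satisfies detailed balance.
        n = len(spins)
        i = rng.randint(n)
        # Energy change from flipping spin i, given its two neighbours.
        dE = 2.0 * J * spins[i] * (spins[i - 1] + spins[(i + 1) % n])
        if rng.rand() < 1.0 / (1.0 + np.exp(beta * dE)):
            spins[i] = -spins[i]
        return spins

And the least-action trick is the same move again: the risk functional plays the role of the action, and asking for its stationary point is the Euler-Lagrange step.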

While I've certainly had to learn a bit so far, I've had to learn a whole lot less than I would have if I weren't able to easily identify the similarities between the two sets of problems (NLP and physics) and pass knowledge back and forth. That skill is just a generalization of the type of pattern recognition and paranoid critical thinking developed in IPR.

Matthew said...

Sorry, that got a little long-winded, and perhaps a bit pompous. The point was not to pimp out my research interests - OK, maybe a little, but everybody pimps out their research; you'd never get grants otherwise. Still, the reason was to offer a personal anecdote on how these skills can be applied in real life and the advantages they provide. I work with software developers who actually are experts in this field, and I've had them come to me for my take on problems specifically because of these skills.