Archive for the ‘Uncategorized’ Category

Common Sense 1

October 29, 2015

Common Sense

Assassinating Gandhi’s Character

October 2, 2015

Today is Gandhi Jayanti, and as usual, there’s no shortage of people who are trying to take him down. Sex is the easiest route to do so, since the man spent so much time thinking and writing about his lack of involvement with it. There’s been a long tradition of people trying to show that Gandhi had inappropriate contact with women.

For example:

It turns out it was an Australian actor. The ripped muscles should have given it away, but at least the photo was not a fake; it was a real-life masquerade. This morning, I got a photo in several channels:

That photo came with a tagline: “jan jagaran laana hai to share zaroor karein,” i.e., if you want to awaken the people, do share. As in tell the whole world that the man was a fake. You can well imagine who in today’s dispensation would want the world to awaken to Gandhi’s sins.

Except that the fake is a really bad one. One look at the woman on the left and it’s clear that she is dressed as no one would have been in Gandhi’s lifetime. This is no masquerade; it’s a fake. Here’s the original:

Perhaps the single most famous picture of Gandhi and Nehru in one frame. Except that in the doctored version, the first prime minister of India has been photoshopped out of the picture and replaced by a smiling damsel. How Freudian! There’s nothing the right wingers would like more than to write the Nehru dynasty out of India’s history, and it’s even better if you can throw some mud on Gandhi while doing so. I don’t know why anyone cares about whom a leader sleeps with and where, but that’s the world we live in.

Saffron friends: you can do better; at least pick a lesser known photo.

The First Few Jayaries

October 2, 2015

I have been working on a re-interpretation/re-narration of the Mahabharata called “Jayary.” Here are the first few episodes in that series. If you like what you see, do subscribe!

Jayary: The First Few Episodes

https://www.listicle.co/list/embed/list.php?listid=212817

Economics as Informatics

September 10, 2015

The dismal science, aka economics, remains one of the great Ponzi schemes of all time. Let me remind you what a Ponzi scheme is: a cigar-chewing salesman gets you to buy snake oil with the promise of major returns. In order to make good on his promise to you, he finds two other dupes to invest in snake oil, uses those investments to pay you off, and then goes looking for four other suckers to invest in the first three. The best Ponzi scheme is one where the head snake oil vendor lets the investors in on the secret and tells them that the only way they can recoup their investment is by getting others to take on the risk.
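
As a side note, here is a back-of-the-envelope sketch (in Python, with purely illustrative numbers) of why any such scheme must collapse: if every round of investors is paid off by recruiting roughly twice as many new ones, the demand for fresh dupes grows geometrically and soon outstrips any realistic population.

# Back-of-the-envelope sketch of the scheme described above.
# The population figure is purely illustrative.
population = 1_000_000   # hypothetical pool of potential investors
recruits, rounds, total = 1, 0, 0
# Each round needs twice as many new investors to pay off the previous one.
while total + 2 * recruits <= population:
    recruits *= 2
    total += recruits
    rounds += 1
print(f"the scheme runs out of dupes after about {rounds} rounds")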

Isn’t that how economics works? Let’s see:

  1. The IMF/World Bank gets Manmohan Singh to sign off on neoliberal policies.
  2. The FM/PM recruits chief ministers.
  3. Chief ministers recruit businessmen.
  4. Businessmen sell dreams to the aam aadmi.
  5. The aam aadmi takes the fall for all of the above.

The only way economics succeeds is by getting the next generation of dupes to buy into the latest theory of how the world should be run by a shadowy cabal. In other words:

Pointy heads in the service of fat wallets

Be that as it may, it looks like a new group of pointy heads is on the march, as data and software begin to eat into economics. Surprise, surprise: Silicon Valley economics is gaining ground over Kennedy School economics.


Metaphorical Power

September 4, 2015

Freud is never mentioned in my intellectual community; of course, his ideas about repression and unconscious desires have been transformed into universal metaphors, but he’s no longer an influence in the circles I frequent. Part of the problem is that he and his followers thought of themselves as scientists, and the science turned out to be less than satisfactory. Even the name, psychoanalysis, tells us that the bearded doctor was marketing an analytical tool. Unfortunately, tools become obsolete rather quickly and we have mostly forgotten Freud’s toolkit. That’s a pity, because I think Freud pioneered a modern approach to meaning making, a way of understanding one’s world that wasn’t tied (overtly, anyway) to a religious tradition. After all, we are semantic creatures, and we need to make sense of the world, not just analyze it.

To a large extent, the psychoanalytic worldview has been replaced by a cognitive one, which in turn is being replaced by a neuroscientific worldview. The first cognitive revolution started in the fifties, with Chomsky using ideas from formal grammar to model natural language syntax. From that revolution came two ideas:

  1. The mind is a computer
  2. The mind is isolated

The first cognitive revolution remains the dominant model of the mind; for example, Steven Pinker, in his “How the Mind Works”, declares that the mind is a computer. Increasingly, we see scientists replace mind with brain, so that:

  1. The brain is a computer
  2. The brain is isolated

That these assumptions are flawed is an understatement; I think they are not even wrong. The second cognitive revolution, which is still struggling to get underway, questions both assumptions at many levels, but it still struggles with a tacit focus on individual, isolated minds rather than fully fleshed people.  This post begins an exploration of cognitive synthesis, of how we can make meaning of the human world within a cognitive framework. 

A couple of decades ago, George Lakoff and his collaborators initiated a major shift in linguistics when they started paying attention to non-literal uses of language such as metaphor. Out of that was born conceptual metaphor theory, which claims that thinking is mostly metaphorical, i.e., the direct opposite of the logical, computational mind of the first cognitive revolution. In the Lakoffian world, we understand bravery as a form of lion-heartedness. Fair enough, but I never understood why one chooses to call someone lion-hearted instead of calling them bear-hearted. What prompts the actual use of a metaphor and its success in conveying meaning? Where does its power come from?

One obvious answer: metaphorical power reflects real power

I have been thinking about the infiltration of capitalism into all human affairs. Once upon a time, we were artists. Now, there’s a creative class. Activists became social entrepreneurs. These are metaphorical shifts that reflect the power of capital to shape our language and the way we understand the world. At some point, metaphor turns into fact, as social change turns into change.org and we start testing the effectiveness of social policies using double-blind randomized field trials.

I am getting ahead of myself; let me get back to metaphors. Capital, when used as a suffix, is a metaphor generator, with social capital and natural capital being two widespread use cases. Take a look at the graph below, from a Google Ngram search for “natural capital” and “social capital”:

[Google Ngram graph: relative frequency of “natural capital” and “social capital” over time]

The two terms have almost no presence until about 1989 or so, when they take off rapidly. Not surprising at all, considering that the Berlin Wall has come down, the Soviet Union is collapsing and, as Fukuyama famously said, “we are at the end of history.” That might well be true, but the end of history looks awfully like the beginning of a new lexicon. One graph does not a theory make, but it does point towards an interesting line of research. I bet you anything that hard power (money) influences soft power (metaphor), which in turn gets institutionalized (via marketing) into hard power.
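
If you want to poke at the same data yourself, here is a minimal sketch in Python. It assumes you have exported the Ngram frequencies to a CSV file named ngram_capital.csv with columns year, natural capital and social capital; the filename and column layout are hypothetical, and the plot is only meant to reproduce the shape of the graph above.

# Minimal sketch: plot exported Google Ngram frequencies for two phrases.
# The CSV filename and column names below are hypothetical.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("ngram_capital.csv")

fig, ax = plt.subplots(figsize=(8, 4))
for phrase in ["natural capital", "social capital"]:
    ax.plot(df["year"], df[phrase], label=phrase)

ax.set_xlabel("year")
ax.set_ylabel("relative frequency in the Google Books corpus")
ax.legend()
plt.tight_layout()
plt.show()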

Consciousness Unexplained

August 30, 2015

Academic work, like any other human activity, is dependent on constant practice. Writing routines are hard to re-establish once they are broken. If you go away to a conference for a week, the momentum built up before that period disappears and is replaced by its opposite, i.e., an aversion to putting thoughts to paper. You could say that this is a psychological law of inertia: you are likely to keep doing things the way you did them in the past few days, so if your routine gets upended for some external reason, the disruption will percolate into your life even after the intrusion disappears. I guess that explains why privacy is important for any kind of creative work: constant intrusions can upset your inertial state even when the offending person goes away (as opposed to self-driven interactions with peers, where you are no longer in the work frame, so your subconscious doesn’t register them as intrusions at all).

Anyway, this psychological law of inertia is not what this post is about. I have been thinking about what is called the “hard problem of consciousness”. By the hard problem, philosophers and cognitive scientists mean at least two different things:

(a) Why is there anything like the qualitative aspect of an experience, such as the enticing red of a local New England apple picked in September that bursts with flavour when bitten, and

(b) The uniquely subjective, “first person” character of consciousness, where supposedly you cannot tell whether I am having the experience of a red apple or a blue mango even if we are seeing the same object.

What seems really strange is that the subjective, first-person character of an experience of biting into an apple can be studied and even understood from an objective scientific point of view. Indeed, if I were running an apple orchard, I could test my apples for some combination of chemicals that increases their perceived taste and hybridize tastier varieties even if I didn’t have a taste bud on my tongue.

In other words, objective quantities can be reliable signatures of subjective experiences. Modern economies depend on (in fact, enforce) our signatures on dotted lines standing for our commitment to various actions. Here is where the problem of consciousness really comes in: on the one hand, these signatures stand for our presence, but on the other hand they are not really us. Nobody would confuse you with your signature on a cheque, but in some sense that signature is also you, as far as the domain of commerce is concerned. So, is the cheque part of you or not?

We seem to have varying intuitions when it comes to collapsing the distinction between signatures and the things that the signatures represent. Turing, in his famous Turing test for intelligence, argued that the signature is the thing itself when it comes to intelligence. According to the Turing test, a computer that cannot be distinguished from a human being as far as verbal behaviour is concerned is as intelligent as a human being, i.e., the signature of intelligence is the same as intelligence itself.

The same puzzle can be seen in our intuitions about the relationship between our minds and our brains: if brain activities are reliable signatures of our mental states, then are they the same as our mental states? Or, to take another example: our facial gestures are reliable indicators of our emotional state, so should we identify facial gestures with the emotions they express? One can see the real quandary that arises in this case: while my feeling of joy doesn’t seem to be the same as my smile, the smile is surely part of the feeling of joy; it’s not just an abstract indicator of my joy.

Here is the heart of the problem of consciousness, then: while objective facts, behaviours, chemical states etc. are reliable indicators of our experiences, they are no more than signatures of our experience. To know a signature is to know enough about the object as far as current norms of scientific inquiry (i.e., inquiry based on the criteria of prediction and explanation) are concerned. If I know the path that the moon took last month as it revolved around the earth (the signature in this case), then I know as much as I need to in order to predict the future behaviour of the moon.

But predictive, explanatory knowledge is not enough for understanding experience. To take the emotion example again, while I can predict that you are angry by reading your facial gestures (and flee if needed), I don’t know what anger feels like to you. A real science of consciousness will not emerge until we can go beyond the current norms of scientific inquiry, which value prediction and explanation over understanding.

What would such a science look like? For one, it will have to start from something besides objective measurements (which are, after all, signatures of the things being measured). At the very least, we would have to record subjective and objective measurements simultaneously. In the emotion case, one would have to record both objective measurements (like the extent to which your eyebrows are raised and your lips pursed) and subjective measurements (reports of how angry or sad you feel). A real science of consciousness will take subjective and objective data as its starting point. Once it does that, both aspects of the hard problem of consciousness become amenable to investigation. Instead of asking “how come there is such a thing as the taste of an apple in a world of objective facts?” we will investigate the relationship between the objective and the subjective aspects of biting into an apple simultaneously. To conclude, it’s only our metaphysical bias towards “objectivity” that keeps us from doing scientific investigations of consciousness.
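
To make the proposal concrete, here is a toy sketch (in Python) of what taking subjective and objective data as a joint starting point might look like in practice. The column names and numbers are entirely hypothetical; the only point is that self-reports sit in the same table as physical measurements, so their relationship can be studied directly.

# Toy sketch: objective measurements and subjective reports in one table.
# All column names and values are hypothetical, for illustration only.
import pandas as pd

records = pd.DataFrame({
    # objective measurements (e.g. facial action intensities, arbitrary units)
    "brow_lowering": [0.1, 0.7, 0.9, 0.2, 0.8],
    "lip_pressing":  [0.2, 0.6, 0.8, 0.1, 0.9],
    # subjective measurement (self-reported anger on a 0-10 scale)
    "reported_anger": [1, 6, 8, 2, 7],
})

# How closely do the objective signatures track the subjective report?
print(records.corr()["reported_anger"])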

Startupman

August 28, 2015

Until about 2000, perhaps even 2005, the term genius was reserved for artists, scientists and philosophers. I think of Andrew Wiles staring at sheets of paper in his Princeton office as he contemplated Fermat’s Last Theorem. Or Miyazaki spiriting us away into a magical kingdom. Steve Jobs and Bill Gates were smart, talented and very, very rich, but they weren’t geniuses.

Not anymore. There’s absolutely nothing that entrepreneurs can’t become if they set their minds to it. They can be creative, they can be wise, they can fix the climate and end poverty and at the end of it all count their billions in their Palo Alto garages. Like so many other words before them, creativity and wisdom have succumbed to the charms of commodification and have become creativity 2.0 and wisdom 2.0. 

While rumors of startupman have been circulating since 1998, he was first spotted on earth in 2011, after the recession had receded a tiny bit and money was flowing through Sand Hill Road once again. Startupman is the universal being of our times. He is scrappy and tough. Complex engineering problems are a piece of cake for him. Most importantly, he can raise money from old white men as if a hill in Boston were named after him.

Startupman’s gifts don’t stop at engineering and business; he can write novels and organize expeditions to Mars. He can meditate to end world hunger while playing the guitar. I am waiting for the startupman app. Rumor has it that Apple, Google, Facebook and Amazon are all working on one, but I bet you there’s a kid in a basement somewhere who’s going to beat them to it.   


Two Dimensions of Data: Newsletter #25

January 26, 2015

What was that old saw: in God we trust, everyone else must bring data? Data and information are the bedrock of modern society. Money, numbers, bits; however you count the beads, it’s data everywhere.

Yet, there’s no real understanding of data among scientists and scholars, let alone the general public. Even the experts view information from within their specialization – let’s say machine learning or information visualization – rather than from an understanding of the science as a whole. Imagine a world in which people learned numerical simulations for space travel without learning classical mechanics. Physics is a great science because its basic concepts – not its foundations, but the concepts that all physicists need to know in order to apply their methods to problems in the world – are drilled into physicists from mechanics 101 onward.

There are two sciences of information: computer science and statistics; both are backed by mathematical theory, but go well beyond mathematics in their real world applicability. Still, there’s a tendency to identify these subjects with their (current) mathematical foundations, i.e., the theory of computation and probability theory. A physicist would find that strange; physics is mathematical, but no physicist would confuse the foundations of physics with the foundations of mathematics. 

Until our understanding of information makes that transition, we won’t have a robust science of form. I believe that transition will require a deeper unification of computing and statistics than is on offer today; to get there, we will have to look at the two disciplines from a bird’s-eye view first and then narrow down on the important questions for unification. It’s a topic that’s beginning to concern me more and more, so I am going to use these newsletters to talk about my ideas every so often. Bear with me if you think I am going all technical.

Let’s first note that computing and statistics bite different chunks of the information universe. Computing helps us engineer information systems – desktop, laptop and mobile computers and computer networks being the most important. Computing (and once again, let me emphasize that I care more about computer engineering than computer science) integrates information vertically, i.e., it’s about engineering information systems from logic gates all the way to iPhone apps. 

Statistics on the other hand helps us with experimentation, getting data from the world. The integration is horizontal; statisticians care about experimental designs and survey techniques; as the data is brought in for analysis, statisticians also care about techniques for crunching and visualizing the numbers.

Computing and statistics have stayed away from each other for most of their history, starting with training and ending with their typical applications. Statisticians learn continuous mathematics, and most of the important applications of statistics have been in unsexy fields such as agricultural genetics and psychology. Computer scientists learn discrete mathematics, and from the beginning the science and engineering has been very sexy – from its involvement in code breaking to the foundations of mathematics.

The proliferation of data is the main reason the two fields are beginning to come together. In particular, we need the vertical engineering of computing systems to be driven by the horizontal flow of data. Incidentally, this is exactly what my PhD supervisor, Whitman Richards, was advocating several decades ago. He got the germ of that idea from David Marr’s work on Vision. The marriage of the vertical and the horizontal is not only interesting as engineering; it’s arguably the best way to understand the relationship between the mind and the brain as well. Machine learning is at the forefront of the marriage of vertical information and horizontal information, and I believe that merger will expand to more and more fields in the future. To be continued.
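
As a toy illustration of that point, and not a recipe from this newsletter: in the sketch below, the horizontal part is data arriving from the world and the vertical part is the engineered stack the data flows through. The CSV filename and column names are hypothetical.

# Horizontal: data gathered from some experiment or survey (hypothetical file).
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

df = pd.read_csv("survey_responses.csv")
X, y = df.drop(columns=["outcome"]), df["outcome"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Vertical: a layered, engineered system that consumes the data.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))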

Newsletter 18: The Society of Knowledge

November 29, 2014

We live in a knowledge society but we don’t have a universal class of knowledge professionals. Every profession deemed universal is represented throughout society. Doctors ply their wares in rural clinics, small town hospitals and the Harvard Medical School. Lawyers occupy the White House every four years. Engineers and architects work for the department of transport, the local real estate contractor and Google. There’s a teacher in every village.

However, we don’t find knowledge professionals anywhere besides universities, where they’re typically called professors. Even there, professors aren’t certified as knowledge professionals but as bearers of some specialized body of knowledge. There’s nothing that makes a professor into a professor; there are only professors of history and chemistry. That’s strange, for lawyers can’t be lawyers without passing the bar, engineers need to be certified and teachers need a degree in education. We mark our respect for a profession by creating a badge that certifies entry into that profession. That certificate also universalizes the profession, so that it can take root in every nook and corner of modern society. Every startup has a CEO, a CTO and a COO. They don’t have CKOs. The ivory tower has prestige, but intellectually, it’s as much a ghetto as it is a beacon.

You might say that a PhD is the certificate for professors. That’s partly true, but most PhDs aren’t professors and never will be. Most PhDs leave the profession of professing, or worse, languish as adjunct faculty. If the certification is a signal of respectable livelihood, then a PhD is a very poor guarantee. Imagine the heartburn that would ensue if 70% of those with a law or medical degree had a position that paid close to minimum wage, with no hope of getting a better job.

In any case, a PhD is a certification of specialized knowledge, not of knowledge as such. A knowledge bearer should be closer to a philosopher, a practical philosopher, than to a possessor of arcane information. Socrates thought his role was to be the midwife of wisdom. I believe that role is far more important today than it was in Athens in 399 BCE. We are deluged by information on the one hand and plagued by uncertainty about the future on the other. The information deluge and uncertainty aren’t unrelated; the world is changing quickly, which leads to more information – both signal and noise – and more uncertainty.

In times of knowledge scarcity, knowledge professions are gatekeepers of access – which is why we have priesthoods and ivory towers. We have moved far from those times. Knowledge is no longer about access but about value: what trends are important and what are fads? What’s worth learning and why? In the future, every individual, every company and every society will rise or fall on the basis of its understanding of value. We need a new category of professionals who will act as weather vanes for the new winds that are blowing; people who understand data making and meaning making. They shouldn’t be content with being midwives of wisdom. Instead they should boldly go where no one has gone before and take us with them.

Newsletter 17: Communicating Knowledge

November 23, 2014

I have been thinking about knowledge and collaboration for a long time, for it greatly affects my own life as a scholar and researcher. The open source movement didn’t invent collaboration; academics were collaborating freely – both as in beer and as in freedom – long before software engineers were. After all, professional engineers work on products that are bought and sold, while academics (in principle, if not in practice) share their wisdom in return for society’s generosity in funding their exploration.

In practice, software engineers collaborate a lot more, and a lot more freely, than academics do. Wherever you look, the situation is better in industry, with all its cut-throat competition, than in academia, with its public charter. Some of it is because academia is actually a lot more cut-throat than industry – there are fewer jobs and there’s less money. Further, unlike an industry professional who can sell expensive widgets for a living, an academic only has their data and their content to flog to the world. The sociology and the economics of academia are well understood now and I will remain silent on this issue from now on; you can always read the Chronicle of Higher Education to see the daily lamentation.

Let me talk about a structural issue instead. You might have heard of the famous slogan: “the medium is the message.” In other words, the means through which you communicate influences the content of your communication. TV news is not the same as newspaper news. For the same reason, academic collaboration isn’t the same as software collaboration. Software collaboration – mostly done via version control systems – is real-time, ongoing and continuous. The time cycle is on the order of hours, if not minutes. The technologies that support collaboration are more or less instantaneous: you run a git push origin master and your collaborator has your contribution in front of them.

Academic writing is a lot slower. Its collaboration technology is built around citations, responses and feedback that have a time cycle of months or more. That worked well in the seventeenth century; now, not so well. It’s true that you can write your scientific paper in a Google doc and see your collaborators’ responses in real time. But that’s missing the point – collaborating on an office document has none of the language and ritual of paper writing. Every element of a scientific article, from the abstract to the introduction, the citations, the data, the discussion, the conclusion and the references, is designed (unconsciously, as a result of a slow evolution over centuries) to address a single problem: how can I communicate my work to a community that lives far away from me and doesn’t have access to my mind or my lab? It’s that mental organization that has enabled a scholarly edifice, built on top of one another’s work. Unfortunately, that design has a half-life of months.

We now expect instant feedback from our communication systems – wherever you look, from phone and Skype to SMS, WhatsApp and Facebook Messenger, people are used to ongoing, real-time conversation across the world. When I first came to the US in the early nineties, I was still writing letters by hand to my friends and family. Most of them didn’t have a phone or a computer. I would write a letter, post it and then wait for a month or so before I received a reply. Within a couple of years, we had all switched to email. It’s true that the handwritten letter had an emotional impact that an email can never have, but for most purposes we don’t need that handwritten note. Certainly not in an academic setting. Scholarly collaboration needs to reflect this new cognitive landscape. A revolution in knowledge needs a revolution in communication.