# TAS for Chomsky

Continuing the thoughts around expressing Chomsky's ethical position on US military actions, as presented in the documentary I watched during the 2016–2017 break.

It became obvious that while propositional logic and set theory are comforting languages, they are not the most convenient for operationalizing an ethic. For example, there seems to be a need to distinguish these prescriptive targets: must do, must not do, may do, may not do.

$must\_do\_actions \subseteq may\_do\_actions$

$must\_not\_do = may\_not\_do$

It seems "may not do" means "not may do" in common English use, even though technically both expressions should be subset relations. I.e. "You may not smoke" states "you must not smoke" rather than "you can choose not to smoke." That imprecision is inconsequential to the current discussion, though.

So in fact the expressions can be simplified by including a do_not_do_a action for each action a in the simple TAS. We must also impose a contemporaneous interpretation on $a \in TAS$, meaning a is an action viewed at some reference moment, ostensibly now.

Then the predicate $must\_do(a, p1, p2)$, read "p1 must do a to p2," is expressed as $ethical(a, p1, p2) \land \neg ethical(do\_not\_do\_a, p1, p2)$. $may\_do$ is simply $ethical$.

$may\_not\_do$ is $\neg ethical$.

The need to restrict propositions to single moments in time is, in retrospect, necessary. All propositions can be sub-indexed with a reference time: the prescriptive proposition $ethical_t$ means something is ethical at time $t$, while the descriptive proposition $do_t$ means something is done at time $t$. For example: $do_w(A, ally, axis) \implies ethical_u(A, axis, ally) \;\forall w, u \in WWII$

That's a mouthful. But at least we can avoid pitfalls such as the "may not" fiasco we have in English.
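The simple TAS encoding above can be sketched in code. This is a minimal sketch, assuming a toy `ethical` predicate; the action facts (`feed`, `smoke`, `steal`) are made-up illustrations, not part of any real ethic:

```python
# Toy encoding of the simple TAS: each action a has a paired do_not_do_a,
# and the prescriptive predicates are all derived from `ethical`.

def negate(action):
    """Map an action a to its paired do_not_do_a (and back)."""
    prefix = "do_not_do_"
    return action[len(prefix):] if action.startswith(prefix) else prefix + action

# ethical(a, p1, p2) facts at the reference moment, ostensibly now.
ETHICAL = {
    ("feed", "p1", "p2"),             # feeding is ethical
    ("smoke", "p1", "p2"),            # smoking is ethical...
    ("do_not_do_smoke", "p1", "p2"),  # ...and so is abstaining
}

def ethical(a, p1, p2):
    return (a, p1, p2) in ETHICAL

def may_do(a, p1, p2):
    # may_do is simply ethical
    return ethical(a, p1, p2)

def must_do(a, p1, p2):
    # ethical(a) and not ethical(do_not_do_a)
    return ethical(a, p1, p2) and not ethical(negate(a), p1, p2)

def may_not_do(a, p1, p2):
    # the strict reading: "may not do" = "not may do"
    return not ethical(a, p1, p2)

def must_not_do(a, p1, p2):
    # coincides with may_not_do, per the equality above
    return may_not_do(a, p1, p2)
```

With these facts, feeding is obligatory (abstaining is not ethical), smoking is merely permitted (both it and its negation are ethical), and must_do implies may_do, matching the subset relation above.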
(Disclaimer: I watched a third-party documentary about Chomsky, in which some recorded statements were stitched together. That doesn't mean I am writing to explain what he actually said or meant.)

# Equality of Benefit

I've been involved in a lot of discussion around bias, equality, and fairness in algorithmic decision making. Without going into an excessive amount of background and detail, the gist of my belief at the current moment is that equality of utility is the safest thing for companies to aspire to.

What is equality of utility? Let's degenerate to binary decision making: given an individual x, who has observable features f(x) and protected feature p(x), suppose the company has to choose between two actions {a, b}. What is a workable definition of fairness or equality in such a decision with respect to the protected property p?

Let god bestow upon us, a neutral third party, a utility functor u whose evaluation on the individual, u(x), is itself a function: u(x)(a) is the utility to individual x of the company taking action a, and u(x)(b) is the utility to x of the company taking action b.

Let g be the company's decision process; g(·) is the decision the company makes, either a or b, for the situation. Then the right thing to do is

$g(f(x)) = \operatorname{argmax}_{i \in \{a,b\}} u(x)(i) = g(f(x), p(x))$

Simple: we do as god says is best for the customer, acting as if we had the knowledge of an oracle, even when we know of some reason for discrimination.
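The decision rule above can be sketched as follows. The utility oracle `u` and the customer's `benefit_a`/`benefit_b` fields are hypothetical stand-ins; in reality nobody hands you god's utility functor:

```python
# Toy sketch of the equality-of-utility decision rule:
# g picks the action that maximizes the customer's utility,
# and the protected feature p(x) plays no role in the choice.

def u(x):
    """Hypothetical oracle: maps an individual to {action: utility}."""
    return {"a": x["benefit_a"], "b": x["benefit_b"]}

def g(x):
    """The company's decision: argmax over actions of u(x)(action)."""
    utilities = u(x)
    return max(utilities, key=utilities.get)

customer = {"benefit_a": 3.0, "benefit_b": 5.0, "protected": "anything"}
print(g(customer))  # picks "b", regardless of the protected feature
```

The point of the sketch is the invariance: changing `protected` never changes the output, because the decision is a function of the oracle utilities alone.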

# The sweet spot between CSS and OSS

I've been hacking on a commercial database at work recently, and spent a good week querying it for a good sample of a decade of the company's historical data. The thing that really kills me is this 15-minute query sniper that has been around since before I joined the company; it exists on every data platform we have ever had: MySQL, PostgreSQL, Hive, Vertica, Spark, Hadoop.

But in an ironic way, I am actually really glad that all this crap works at all! How recently would MySQL just get stuck, run out of memory, or hit some other hidden unknown problem? How recently did Hive crash? How recently did you have to query HBase for data? But all that is under control because the company managed to pay for a product that actually works. System-V (anonymized to protect the company) can actually handle our workload of funky 5-level-deep subqueries, multiple mixed inner and outer joins, filters, group-bys, aggregations, and string operations, and it never peeped a single complaint! It just runs until the query is killed. Production ETL was not impacted. Other analysts didn't complain. All I got was that when the query got big, it was killed.

Of course this two-dozen-person team, director-level senior management and all, plus all those servers, licensing fees, and training classes, and maybe a few quarters of ramp-up, is more expensive than the two weeks I spent bringing up a Spark cluster on an HP laptop and a Dell server. My cluster, btw, handled the same-sized query fine on Spark. I only had to upgrade the disks slightly from factory default.

What this illustrates is that closed-source software is catching up with open-source software! This is the sweet spot where closed source is at parity in performance and features with OSS, lacking only cheap install and maintenance. Everything works everywhere. All you choose is your price and reliability. This is where software should stay forever! Any geek can code up a new algorithm in a matter of seconds, test it, launch it in the next release, and beat the CSS to market by a quarter or two. Companies that choose not to use OSS have to wait those few quarters, but that is a choice they now have! CSS actually must keep up with OSS to stay afloat. OSS is no longer the only choice for real features, performance, and non-stupid implementations.

Competition is so awesome for consumers!

# “Everybody Lies”

I'll be posting about this book by Seth Stephens-Davidowitz in a year or two. It's an interesting book for its time, May of 2017. It is brutally honest, without reservation for political correctness, but it is hard to distinguish stated facts from opinions, and hard to distinguish statistically quantifiable statements from idle observations. (The author tries to mark the lack of rigor, imho, by saying he doesn't really use the properties of the vodka test (Kolmogorov–Smirnov test); otherwise his repeated mention of the vodka test is completely over my head! What? Is it a useless test or what? It's really old news that statistical tests are sometimes useless in the face of big data, since simple algorithms work really well at scale. I also look forward to Gelman's begrudged review and discussion of the statistical aspects of this work.)

The book probably does the most to make me want to see his data and analysis. To check his conclusions: that (white) Americans are very consciously and obnoxiously racist (oh thank goodness, I thought I was so unfit for this modern society with my own frequent racist thoughts)… that interest in homosexuality is geographically uniformly distributed and limited to a 5% minority of the population… that Indian men often want to drink their wives' milk… that all these hulking men and bobbing chicks in America are humping at a rate of less than half a dozen times each fortnight… that……

Well let me finish reading the book and learn a bit more and post back in a few years.
And it would be really nice if this ability to look into humanity is preserved somehow.

# Reimagining Bad AI–what will humanity lose?

AlphaGo just beat the Chinese at Go (围棋). In all the games, single player or five players, they never came close. The Chinese team was gracious and kept up a good spirit of collaboration and learning. Although, seriously, the champ seems to speak his mind, and probably that of many other people: it really sucks to lose, it doesn't feel good, and who wants to play against an AI?

There is some joy, I suppose for some people, in watching these pompous Chinese play their most treasured game and get beaten badly. For some people, it is a disgrace that they just have to swallow whole. It is a game where you have the world's best bang his head against an AI for four hours and just lose. These passionate people, at the top of their profession, giving it a real go, just lost to the computer. And then the computer pulled a Chinese move and quit professional competition at the top of its game! Wow!

So I guess AI is really closer than we thought. Seeing the top pro of one competition decimated by AI immediately brings to mind all the other people who are also passionate and at the top of their own games, in a similar situation. Thinking of drivers, programmers, chefs, teachers, scientists, artists, … leaders. Will they all compete and lose to the AI in such heartbreaking defeat?

I used to make light of the situation and say that AI has no chance against the Hitlers of the world: it cannot possess enough evil to out-evil humankind. But after the last AlphaGo win, I do not even believe that. Think of the next allied forces banging their heads against an AI Hitler, then losing humanity to it.

Evil perhaps is not the only thing that begets evil–perhaps knowledge begets evil too?

Just think of your passionate, top-of-their-game financial advisor fighting against a machine. What about your political leader fighting against a machine? What about artists? The reality is setting in. What incentive will humans have to do any of these things well? Why would a human improve himself at anything if he can make a computer better at it with less human effort and sacrifice?

Sure, there is the counterargument that people still run marathons, much slower than the Olympians, and that running survived horses, boats, planes, Segways, and automobiles. But in reality, running survives because it is a nonessential and not-really-competitive sport. The whole jogging-marathon thing doesn't really scale as a competition. Not running has no impact on most people, and competitive running isn't what makes most people happy: running usually produces more happiness standalone than winning at competition does.

It seems that, in the impending age of AI, we can still try to make some predictions about what will be eliminated and what will stay. If humans are making rational decisions, then the things that will stay are those we need directly for physiological, psychological, and social reasons.

# It’s an Open Relationship

Here is my canned answer to all recruiter inquiries via LinkedIn or email:

It's not very polite, or that effective, but I simply cannot reply to all inquiries in a timely fashion. Hopefully I can be polite, fair to myself, my conscience, and the market, and still make a living…

When this actually posts, we’ll know what it does to me.

# Re: Chomsky documentary

…will continue the thoughts around this from the previous post…
I'm watching a documentary about Chomsky during the 2016 winter break. Around 29:00 he talks about the treatment of the Kurds: chemical warfare was the most advanced thing they had, and they felt it righteous to use it against the enemy. It makes me wonder what we use today on civilians. Genetic mutations? (To become dependent on other races?) Micro-nanites that do bad stuff to the body? Subliminal messaging to cause the most embarrassing and economically destructive errors? What are our most advanced weapons today? It has to be money, right? Capital weaponry that destroys other countries. Perhaps this evil, necessary or not, will help us think about this whole AI thing. Oh, and there is another one: the halting thought is one important mental agent that can be used against people!
The framework under which we can discuss the terrorism Chomsky describes is the TAS framework (Transitive Action Spaces). He points out that a set of actions in the nation-state-against-nation-state TAS (bombing, assassination, spying, many means of killing civilians) are all terrorism, or war crimes, no matter who fills the valences of an action in the terrorism set. We, those endowed with standard human intelligence, tend not to think in TAS, and even when we do, we bias toward our own nation state or cause.
I can imagine myself believing that some actions in a TAS are admitted into an ethic only under additional restrictions on the properties of their valences. For example:
abduct_president(country A, country B)
is pretty terrorizing, right? We can add some kind of property restriction, such as
Number_of_soldiers_in_current_conflict(A) < Number_of_soldiers_in_current_conflict(B)

Then the action abduct_president is admitted as ethical. Interestingly, Chomsky points out that the rule used to evaluate the ethicality of actions at the Nuremberg trials was this:
$Actions\_taken(winners) \subseteq Ethical\_actions$

$Actions\_taken(losers) \setminus Actions\_taken(winners) \subseteq Unethical\_actions$
Anything the losers did that the winners didn't do is considered illegal at the world court. This imposition maximizes the winners' freedom during the war and minimizes the losers' freedom to act (here freedom means available ethical actions). When the eventual winner acts according to the silver rule, he conquers the moral high ground (or alternatively, he gets to punish the loser for every action he does not desire). The eventual loser maximizes his freedom when he does everything the eventual winner does (an eye for an eye, tit-for-tat), because those actions become ethical and he is not punished (assuming winner and loser have the same desires).
Of course it is unfair. But at least it's not winner-takes-all. Another completely amazing fact Chomsky points out is that "_in_the_current_conflict" was a necessary suffix to all propositions, as the rules changed in the next conflict in a different theatre: the US took actions against the Vietnamese that Germans had been convicted for at the world court as crimes against humanity, after that very conviction.
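The Nuremberg rule above is just set arithmetic. Here is a toy sketch; the action sets are made-up illustrations, not historical claims:

```python
# Toy encoding of the Nuremberg rule: whatever the winners did is
# ethical; only what the losers did and the winners did not is unethical.

winners_actions = {"bombing_cities", "blockade", "espionage"}
losers_actions = {"bombing_cities", "espionage", "surprise_invasion"}

ethical_actions = set(winners_actions)                # winners' actions, blessed
unethical_actions = losers_actions - winners_actions  # losers' unmatched actions

print(unethical_actions)
```

Note how the loser's tit-for-tat strategy falls out of the set difference: every action the loser copies from the winner is subtracted away, so copying the winner exactly leaves the unethical set empty.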
So it would seem that TAS is a rather easy system for discussing matters of ethics after all. The accepted ethic seems episodic, one per major conflict. The ethic encompasses a TAS but can be parameterized by properties of the parameters of the TAS (such as population), and it can depend on what has happened in the current conflict. What is ethical may depend on what other people do, not just what you and other people want.
And I should reiterate that Chomsky feels an ethic should be universal. That may mean that when you are the winner and I am not, the same unfair rule applies to me. This seems tautological, as the loser has no choice, but it needs to be stated for completeness. I think a more restrictive universality would stipulate that the ethicality of an action should not depend on whether its subject is winning or has won.
The reasoning sounds sound, so why is he controversial? What is he being challenged on? "Terrorism is bad no matter who does it, and the US, like every other dominating world power, does it": his statement seems right to me!? If those who do it feel justified, it needn't be hidden; it can stand under the light of reason, right? I would be curious to know what the fuss is about.

# What are we threatened by

I am well behind the times, watching the first season of Silicon Valley while they shoot the fourth. It would seem that aside from all the usual threats we have in Silicon Valley, all those smart kids barely out of their teens with the IQ of Einstein taking our pie, the foreign nationals stealing all our algorithms and privacy, large corporate takeovers, AIs, there is now a new threat!

The threat of mockery from Hollywood! They will taunt the project that you've secretly schemed over for years and years, developing it in your garage. They'll make it into a freaking TV show, except with hotter chicks and a better-looking you.

Sigh….

# Data Harvesting

I've been thinking about the book "Everybody Lies…" One thing the author uses a lot is data from many different sources. I guess it should be recognized when someone does something good, even if that work is based mostly on the diligent products of many other people; in this case, data of all sorts.

Looking at recent Kaggle competitions, it also seems that companies are starting to notice this. Some competitions, such as the Zillow \$1mm competition, not only do not prevent competitors from using outside data, they encourage them to use new data sources.

That's very interesting. This Kaggle competition encourages not only competition in model building, but also data harvesting: finding and using mature but previously unused data.

This may very well continue for some time yet as we find new ways to treat more and more objects and information as data.

What will be harvested next?
