2018-12-07

Liulishuo Huang (流利說謊);
Or, Lying Fluently

When Bullshit Blows the Cow (吹牛, "to brag")

You know, I honestly wanted to lay off blowhard programmers who pretend to be language gurus. I thought Duolingo was example enough. Luis von Ahn showed dick for linguistic knowledge in his TEDx talk. Instead, he pulled the con: "I'm a computer genius. I have a vision. Give me money." Then, he removed the only good thing about Duolingo (the part that translated websites). Then, he sold ignorant people a shitty pile of quizzes that were built for a separate purpose. Surely, people won't fall for that again, will they?

Well, fuck my luck! Some Chinese guy copied the exact same bullshit formula. His name is Wang Yi (王翌), and he's not just any language industry fraud. No, no! He worked for Google for not even two years. A product manager, very nice! The product was about language learning, right? I'm betting not. Even if it were, it's not like he mattered to it. Product managers are middlemen between coders and corporate folk. They don't build the software. They don't even come up with the ideas. They just get products out on time and under budget. Sound like he's too focused on business to care about language learning? Seem like he probably doesn't know shit about language education? Bingo! He's yet another snake oil salesman.

It's all here in an interview he did in March:


"So, that’s something that I’ve sort of got off the plane and started really observing the local market when we saw the need. But then, we thought, 'Okay, mobile is surging.' Pretend you’re in May of 2012 in China, you saw this karaoke app named 'Changba' really surging. At first, I thought, 'It’s stupid. Who would be singing to their cell phones?' But, apparently a lot of people did across different age groups. […]
"And I thought this built-in microphone thing is really unique. It could change people’s behavior. So, we thought, 'Okay, if they were so into singing into their cell phones, maybe they could also practice English. But then, how can we make them stick? So, we thought, Angry Bird [sic] was very popular at the time. Can we gamify it? What would be the key elements of gamification that we could use? Instant feedback. So, what kind of instant feedback? Maybe some feedback on their pronunciation to keep their score, to give them some indication of how good their pronunciation is.' At that time, I called my college buddy Lin Hui who was a research scientist at Google in Mountain View back then, specializing in speech recognition and data mining. I’m like, 'Hey, can you do this?' He said, 'Yeah, of course.' And, I’m like, 'Okay, so why don’t we do something together?'" -- Wang Yi, a program manager at Google for 22 months
Did you catch that? He got the idea from a goddamned karaoke app! He didn't even question himself. He didn't go, "Maybe I should check some literature. I should see if my idea makes any fucking sense." Nope, he shit it out, called it gold, and then called Lin Hui to do the actual work.

Also, in case you were wondering, that idea is shit. Implicit corrective feedback is okay. There's a problem, though. Liulishuo's design doesn't provide corrective feedback. It just judges your output as it pushes you along. That's Wang Yi's "gamification". The game is called "Guess What My Database Wants To Hear". Confusing report cards with corrective feedback only proves that Wang Yi and Lin Hui are Chinese. Calling report card generation a "game"? That just proves that they don't know what fun is.

Meh, he spun his bio and didn't do any real research. Big deal!

Yeah, what matters is if Liulishuo actually works. Oh, wait.

His Staged Interview Showed That Liulishuo Doesn't Really Work

"She [my mother] is literal [sic] addicted to the app now, and sometimes even enlists my help when she's stuck at a certain level." -- Zara Zhang. an investment analyst at GGV Capital and a former journalist 
This is clearly an unbiased report from an objective former journalist. It couldn't be that GGV Capital got Liulishuo (LAIX) its seed money, and then got GGV podcasters to kiss Wang Yi's ass in an "interview". An objective journalist admits that her own mother gets "stuck" and needs her to make up for Liulishuo's defects. Never you mind that, though! Wang Yi went to Princeton. He worked for Google. He put AI in his app. He shot an 18 on his first golf game. Okay, one of those isn't true. He didn't really put AI in the app. "AI" is just a buzzword for stuff that's existed for decades. The only "AI" in Wang Yi's app comes from his users who grumble, "唉呀……" ("Āiyā...", a groan that at least begins with "AI").

More to the point, though, just listen to Wang Yi, himself. He claims that Liulishuo improves English pronunciation. Really? Let's listen to a sample:


Tell me, native English speakers, does his "pronounciation" impress you? Do you think, "Wow! He looks Chinese, but he sounds like he's from Houston!" Or, does it sound like he's using Chinese phonemes to approximate American English ones? (Answer: The latter.)

What about his grammar? Not perfect, but forgivable. He still hasn't mastered English's grammatical number agreement. Chinese expresses grammatical number very differently. It takes appropriate input, feedback, and experimentation to learn it. However, none of Liulishuo's exercises specifically corrects it. Worse still, since "feedback" equals grading in this frog's well, he'll never get the feedback that he needs.

We can note, though, that he speaks fluidly and has a strong vocabulary. Of course, he also got his PhD from Princeton. Last I checked, Princeton's doctoral candidates write their theses in English. And, surprise, surprise! His computer science thesis was not on computational or corpus linguistics.

I will say this to his credit: There's real artistry in his con artistry. Of course, any research into his life and work shows that he's not qualified to develop a language-learning app. He's pretty fluent in English, and that's it. Big fucking whoop! I know Mexican farmers more fluent than this guy. The difference is that they can't say (and omit) things like:
  1. "I graduated from Princeton (in an unrelated field)."
  2. "I worked for Google (for 22 months)."
  3. "We have the biggest archive of (badly accented) Chinese speakers of English."
That pales in comparison to this fact:

Wang Yi Hawks His ESL Wares In China

Not Singapore, not Hong Kong: China. Why? He'd probably say it's because he's from the Mainland. But, what's more likely is that those other places check credentials. China and Taiwan are rife with bullshitters like Wang Yi. Wang Yi ironically complains about this, himself. People there blow cash on expensive, ineffective instruction. Wang Yi's response? Get them to blow cash on less expensive, ineffective instruction.

The fact is that East Asia's ESL industry is decades behind. That's reflected in virtually every for-profit ESL company there. In China and Taiwan alone, VIPKid, Bright Scholar, Wall Street English, 51Talk, TutorABC, SayABC, Alo7, HESS, and many, many more make insultingly bad materials. Also, they treat their employees like garbage. Often, it's hard to say which reeks worse — their disgusting materials, their corporate arrogance, or their false idol worship. All of their founders, like Wang Yi, just wanted to make a quick yuan. They saw that they could sell the false promise of a key competence to a naive public. All of their founders, like Wang Yi, know fuck-all about multilingualism. I know this because I've talked with many of them. I've read the bios (in English and Mandarin) of the rest. Half of them aren't even bilingual. Wang Yi is no different. His bullshit is just a different shade of brown. He pretends to have some "AI" secret to "automating" ESL instruction. He doesn't. He just knows that those words make businessmen's dicks hard these days. If East Asians become even partly aware of sensible ESL pedagogy, Wang and the rest will all be bankrupt in a year.

Liulishuo:
When you want your bad ESL ideas mashed together.
There's a reason why China's main exports are American products, raw materials, and knockoff goods. Neither China's culture nor its government prizes original thinking. Our Western innovation steers the majority of their economy. When products of that innovation reach them, they reverse-engineer them. Then, they build knockoffs and pawn them off on people who don't know or don't care if they're fake. They treat education the same way. Wang definitely did. He saw some popular ideas, tossed them together, and called it "innovation". He then insulted our intelligence with a crap app, a clipped bio, and some big talk. To quote Dennis Miller, "To call him a scumbag would be an insult to bags of scum."

I personally can't wait for the hype to die and his business to fail. He'll have deserved nothing less. Hopefully, exposing him as a fraud will help push the inevitable.

2018-12-01

Language Without Metalanguage 4:
Visualization Over Analysis

Where "Picturing" And "Thinking" Part

When I work with students, the hardest thing to teach them is to think less: not to stop, not to check themselves. Output, wrong or not, needs to come out. For an easy analogy, I often quote William Forrester:



"The first key to writing is to write, not to think." -- William Forrester
This is exactly how speaking is. Fluent speech, just like fluid writing, is automatic. Sure, we pause our speech for various reasons. We never think about where we should pause, though. We don't ask ourselves how fluent our speech is. We speak first. We revise our speech afterwards.

But, why do students struggle with this? Two major reasons come to mind.

The First Barrier: Perfectionism

Students are often afraid of being wrong. They want to do everything right the first time. It's part of our psychology. To be wrong is to admit deficiency. It's a sign of weakness, and we dislike vulnerability. We're conditioned to be proud of rightness and ashamed of wrongness.

Unfortunately, foreign languages are too complex. You won't speak them perfectly on your first try. Your brain has to correctly coordinate phonological, semantic, and syntactic information. Meanwhile, you don't fully know what right or wrong speech is. You just know that you don't know. Worse yet, you have to risk being foolish to remove all doubt.

This is one likely reason why toddlers progress in natural languages faster. They aren't conditioned to feel shame in being wrong. Also, they're virtually deaf to direct grammatical correction:
"The evidence from the experimental language acquisition literature is very clear: Parents, despite their best intentions, do not, for the most part, correct ungrammatical utterances by their children."
However, that barrier is more easily overcome. It just takes some humility and some thick skin. Drop your pride and take criticism lightly, and you'll be fine. Besides, the second barrier is much more troublesome.

The Second Barrier: Analytical Bias

Starting your first year in school, you're coached in analysis. You're taught to examine facts. You're taught to dissect complexity. You're taught to compose reasoned thoughts. You're taught to cite references. You're taught to fit the institutional ideal. From stickers on worksheets to high GRE scores, people mainly judge your intelligence on this skill. Yet, it's just one domain of mental activity.

你們都是幹錯的! ("You're all doing it wrong!")
Such emphases on analysis impart a cognitive bias, as well. In psychology, it's called "the law of the hammer". Analytical hammer in hand, we'll try to pound every foreign subject. What's more, natural language looks just like a nail for it. It's bound by rules. It has guidelines. Experts can judge it as right or wrong. We can pinpoint errors. But, what good, deep analysis of language acquisition shows is that this is misguided.

Analysis of language is like analysis of music. Sure, elements of language are its terms, its phrasal organization, its agreement rules, and such. Elements of music are its notes, tempos, and such, too. But, just like music is not the application of music theory, neither is language the application of a language theory. Theory comes later to explain what arises naturally. We can hum tunes and speak just fine without theory. The idea that learning a new language, like learning a new genre of music, requires this theoretical knowledge is just plain false. The facts are in. It does more harm than good. Even its advocates only support "judicious" and "developmentally ready" uses of it.

If analysis doesn't help, then what does?

What helps language acquisition is a method that is informed by sound theory. What helps more is to remember that this doesn't imply teaching the theory to you. You don't need to become a biochemist for antibiotics to work. Likewise, you don't need to become a linguist to learn foreign languages.

To answer you more directly, I'll analyze analysis for you! What are some key features of analysis? What features are its opposites? Answering these questions, we find something anti-analytic, like so:
  • Analysis is taxonomic (about members and sets).
    Therefore, a sound method must be meronomic (about parts and wholes).
  • Analysis is computational (about derivations from rules).
    Therefore, a sound method must be creative (about creations without set rules).
  • Analysis is deliberative (about organizing concepts).
    Therefore, a sound method must be autonomic (about raw observation).
There is sound research in SLA and in "literacy therapies" (for ASD and dyslexia sufferers) that yields such a method. It's often called "visualization". The easiest way to summarize visualization is with two words — "directed imagination". The headword, "imagination", takes its etymological meaning. Imagination "makes an image" for us. All normal humans have this capacity. We can dream vividly. We can picture hypothetical scenarios. We can even recall memories. If we can see, these images are mainly visual, and our sound method will exploit this fact. Second is being "directed". Visualization isn't just random flashes of nonsense. Our images create meaningful, sequential scenes. Each of us gets a front-row seat in our own Cartesian theater. Again, our sound method will exploit this part of ourselves.

Go to Dan Dennett for the analysis.
One more thing we must consider is this: Visualization is pre-linguistic. That is, before we ever had words, we had the ability to direct our imaginations. Babies and dogs dream. If they had no such abilities, basic recognition (of mothers or masters) would be impossible. The corollary?  Materials in a sound method have to be pre-linguistic. A learning session demands structure. That structure, though, can't force a specific form of language. It also can't encourage parroting already heard language. What causes both of those is the presence of linguistic input. Learners are too tempted to copy or paraphrase what they immediately hear or read. We're too tempted to accelerate past or skip the visualization and go straight to language. Remember, though, we want to create language (express thoughts individually) beyond just producing it (forming sentences). That, in turn, demands focus on our pre-linguistic state.

Such materials exist, but are not in language textbooks. Instead, they're in…

Dialogue-Free Media

This Buni Comic is a good example:

A comic is worth 5,000 words.
Now, the method I use takes this story piece by piece. I ask students questions about the image. The questions are ordered by analogy with orders in logic, and they proceed like this:

  • Zero-Order Questions (Characters and Objects)
    • "What do you see?"
    • "Who is there?"
    • "What is that?"
  • First-Order Questions (Actions and States)
    • "What is that first rover doing?"
    • "Can you describe the parachute?"
    • "What does the second rover look like?"
    • "How the first rover feel about 'her'?"
  • Second-Order Questions (Action in Setting)
    • "Where do these rovers meet?"
    • "When did the second rover land?"
    • "How many times does 'he' drive around 'her'?"
  • Third-Order Questions (Transitions)
    • "How did the second rover land on Mars?"
    • "Why are there hearts around 'him'?"
    • "What is 'he' driving around 'her' for?"
  • Fourth-Order Questions (Opinion and Conjecture)
    • "What do you recommend that the rover do to win 'her' love?"
    • "How would things have unfolded if the second rover were 'male'?"
In most sessions, first-order questions lead to first- and second-order responses, second-order questions lead to second- and third-order responses, etc. Either way, I just follow their level. I just need to be sure that my questions can be answered. Fourth-order questions allow for more liberty in the responses. There, coherence matters more than truth. Along the way, I give complete, native sentences reflecting what they said. Part by part, the learner describes the whole.

I then remove the images. They must visualize their summary. They're not reciting it. They're creating it. They're saying what they're confident they can say. When they're stuck, I ask a question to help them recall their images. Then, at the very end, they get a transcript of a corrected summary. That's their input. The learners don't need vocabulary drills. They don't need a grammar lecture. They know what they said. They just then see how to say things more clearly.

Finally, above all else, I remind myself:

Watch Those Eyebrows!

Thinking hard.
Hardly thinking.
Analysts have obvious tells. They furrow their brows and focus their gazes.

Visualizers have opposite tells. They raise their eyebrows and look askance.

Of course, some people are just mean-looking, so a baseline is important. Once you find it, though, you must switch that habit. It's frustrating at first. Some learners don't feel like they're learning unless they're deep in thought. Others assume it makes no difference, and so do what's habitual. That's where William Forrester's talk with Jamal is again relevant:
William: Is there a problem?
Jamal: No, I'm just thinking.
William: No, no, no. No thinking. That comes later.
Thinking comes when you can teach yourself. You just need to save your analysis for the end. If you're not sure whether you're ready, try narrating The Pink Panther in your target language. 100% confidence in 95% of your output, that's your goal. Anything less, and you'll need a native to guide you.


2018-11-11

Duolingo: A 1930's Method for a 2030's Platform

TL;DR: It Sucks

Duolingo sucks. It's the product of pure programmers trying to be educators. There's no innovative idea. There's no modern, research-backed approach. There's just naivety. And naive programmers do what they always do:
  • What seems intuitive to them, or
  • What others have already done, so long as it's easy to code.
Now, my series against top-down approaches should make this clear: Programmers' intuitions about language create crappy models for natural-language learning. To explain why, I'll start with a little bit about me:

As I've mentioned before, I trained for years to become a logician. That means I learned the same intuitions programmers apply. I spent years becoming familiar and competent with such top-down definitions and rules. But, I also learned how they restrict admissible natural language for limited, formal purposes. Relatively few logicians work on expanding logical expressiveness anymore. Montague was probably the last major logician who did. However, Montague's approach turns off most programmers. They're perhaps right not to like it. It's not an elegant solution. Programmers don't want a programming language that's as complex as the language they want to model. Well, too bad for them. That's the deal. If a natural language is higher-order, only a higher-order logic will capture it.

So, these pure programmers are stuck in AI purgatory. The correct leaps go against their likely intuitions, and their intuitive leaps go against what's likely correct. And, it's obvious that Luis von Ahn and Severin Hacker were programmers first and applied linguists a distant, distant, distant second.

I get it, Duolingo programmers. It's hard. You nearly have to saw your leg off to escape it. Your created worlds are so elegant. Then, some Italian flips you off, and it fucks up your whole universe:

Una salida elegante para von Ahn... ("An elegant way out for von Ahn...")
"Wittgenstein was insisting that a proposition and that which it describes must have the same 'logical form', the same 'logical multiplicity'. Sraffa made a gesture, familiar to Neapolitans as meaning something like disgust or contempt, of brushing the underneath of his chin with an outward sweep of the finger-tips of one hand. And he asked: 'What is the logical form of that?'"


Okay, so they'll hire some linguists, and we'll get a better solution, right?

You'd think so. You'd hope so. So far, though, no. It's clear at this point that Duolingo has been institutionalized. Even if the best SLA experts at their company promote a paradigm shift, it will probably never happen.

This is because of the designers' second naivety. They didn't come up with any original pedagogy. They didn't even research modern language pedagogy well. Instead, they stood on the shoulders of midgets.

Anyone who's read a foreign-language textbook since the 1930's recognizes Duolingo's structure. It's a bastardization of the situational and structural syllabus (SSS), and we've long known that there are shitloads of problems with it.

SSS Treats Humans Like Programmable Devices

An SSS is a drill-and-kill approach. It has you practice one grammatical structure or feature, 20 or so vocabulary terms, and a few key phrases. Then, the rest of the syllabus assumes or ignores it. That's only a sane strategy if you're a computer. Real humans have flimsy memories. They need reinforcement, but not blind repetition. Writers of SSS materials rarely address this issue. (They're often not paid enough to give a shit.) Yet, even when they do…

Every SSS Is Full of Prescriptive, Unnatural Language

When SSS developers create their materials, they usually follow this protocol:
  1. Pick some situational keywords or grammar features.
  2. Shove the elements of (1) into a situational dialogue or narrative.
  3. Build exercises around the unnatural product of (2).
Just think from your own life. Imagine writing a short story. It's a good, interesting story. Then, some asshole comes to you and says, "Hey, make sure the word 'falafel' is in there at least three times. Also, it needs to contain at least four sentences with prepositional-phrase complements." The correct response, of course, is, "What? Why? Who the hell cares if that stuff is there or not!" One correct answer: editors of language-learning magazines and textbooks. Another correct answer: retards.

Language learners recognize how fake and stupid SSS materials are. They endure them, though, for their perceived benefits. They also have few other choices in texts and apps.

But, it's not like they have to do things this way. Real language is already out there. It's just that SSS course developers are too lazy or stupid for corpus linguistics. Plus, it's cheaper for them to make stuff up than to look stuff up. Sadly, that cost-cutting, top-down design also means that…

Every SSS Treats Language Use Like Subroutines

Here is an artist's rendering of ideal Duolingo users.
An SSS doesn't even treat human behavior as dynamic. To an SSS developer, we're all bland creatures of habit. We all fly to some foreign country, book a hotel room there, eat in its restaurants, visit some tourist traps, and engage in vapid small talk. The creators of SSS weren't Parisian. With that level of condescension to our humanity, though, they might as well have been.

Again, it's totally ass-backwards. It doesn't even prioritize situations by urgency. Take this example:
  1. Emergencies: visiting hospitals, reporting crimes, warning bystanders, etc.
  2. Daily survival: getting food, finding places, arranging shelter, doing transactions, etc.
  3. Personal sanity: expressing feelings, socializing, describing personally relevant things, etc.
  4. Self-improvement: gaining knowledge, power, wealth, etc.
I just made that up, and it's better than 99% of situational organizations. To make it, all I had to do was ask, "What situations demand language use?" not "What situations are most common?" However, even then, it's inadequate. There's no individuality, spontaneity, or unexpected element. It's a pre-planned, phrasebook attempt to impart language. Machine translators, medical interpreters, and more killed off phrasebook demand. That shift should have killed off the SSS approach, as well.

This doesn't sound quite like Duolingo's syllabus.

"It's alive!"
That's because Duolingo's programmers changed the SSS protocol to make programming it easier. That is to say, they took what's broken, and then added their broken perspectives. Their biggest change? They cut out narratives and dialogues. You know, the part of the SSS model that provides meaningful context? Yeah, that's not easy to code. So, instead, they created "Frankenstein sentences" — syntactic skeletons with random lexical transplants. Unfortunately, they didn't implant a brain. The result is a bunch of weird sentences in Duolingo's modules.

Hell, their Chinese practice sentences don't even separate words correctly! Did I say a distant second? I take that back. Some people are linguists, some people are not linguists, and some people are not even linguists. You can guess where Duolingo's creators and designers fall.

Worse still is where they sit. They're too big for the necessary, radical changes. More than that, they seem unmotivated to rebuild their busted vessel.

That brings me to my biggest gripe with Duolingo:

It's Nothing But a Busted Language Quiz!

It's incorrect to call what Duolingo offers "lessons". Lessons imply teaching. Duolingo doesn't teach anything (at least, not for free). What it offers are scaffolded piles of quizzes. It doesn't monitor your accuracy as you work. It doesn't give you pointers as you stray off course. It doesn't review your submissions with any nuance. Duolingo is not a platform in which you are taught. It is a platform on which you are judged. To Duolingo, your work receives one of four judgments:
  I. It's right (i.e., what we expect).
  II. It's right, but not what we expected.
  III. It's wrong, but forgivable.
  IV. It's wrong.
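For my fellow programmers, here's what that scheme amounts to, as a caricature in Python. To be clear, this is my sketch, not Duolingo's actual code; the function, its arguments, and the sample sentences are all invented for illustration. Notice that everything in it is string lookup and judgment. Nothing in it teaches.

```python
# A caricature of exact-match grading: map a submission onto the four
# judgments above. Note what's missing: anything resembling teaching.

def judge(submission, expected, accepted_variants, forgivable_errors):
    if submission == expected:
        return "I: right (what we expect)"
    if submission in accepted_variants:
        return "II: right, but not what we expected"
    if submission in forgivable_errors:
        return "III: wrong, but forgivable"
    return "IV: wrong (minus one heart)"

# A correct translation that isn't in the database still lands in (IV).
print(judge("I am eating an apple.", "I eat an apple.",
            accepted_variants={"I'm eating an apple."},
            forgivable_errors=set()))
```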
Since (II) and (IV) piss users off the most, I'll keep my gripes to them.

Let's start with (IV). Have a look at the image below. Imagine it's your quiz. How does it make you feel?
"Follow my way, and my way only!" -- Anonymous Prick
Wronged? I'm going to guess wronged. That's how it feels to be fully bilingual, and then to be told by Duolingo, "Well, our shitty program expected a different result, even though yours is correct, so fuck you! Minus one heart." It just gets worse from there.

Me: "What if a dozen native speakers take this 'lesson' and give you the same translation?"

Duolingo: "Fuck them! Minus one heart."

Me: "What if I correct your sentence, and then get support from natives on your own forum?"

Duolingo: "Fuck you! Minus one heart. I mean, unless we approve your correct translation days later. Then, sorry for the 'fuck you'. As a token of thanks, here's some spam."

Me: "Can you at least give us the L2 audio? That would be fair, given how often you're wrong when you say we're wrong."

Duolingo: "No! How else are we going to force learners and bilinguals to correct us?"

Now you know how the data for (II) is built. It should be the Duolingo motto: "Spammed if you do, slammed if you don't." Generalizing from this brings me to my outrage at (II). Duolingo basically un-shits itself on the backs of actual bilinguals. You see, when a new Duolingo beta release comes out, lots of bilinguals flock to it. As with all of their beta releases, it's godawful. We make good-faith efforts to fix their mistakes. They "approve" our translations, but they give us no credit. One by one, we lose patience with Duolingo. Most of us abandon it. Some of us tell everyone else to stay away from it. We roll our eyes when new learners tell us they're using it.

"But, it's free!" yell the ignorant masses. First of all, just because there's no up-front cost doesn't mean that it's free. These users don't reason through the opportunity costs. There are hundreds of things you could instead be doing. Dozens of those things are better for language acquisition than Duolingo. You can do better than to be judged by incompetent judges.

If it's so bad, then why do so many people use it?

I'm afraid that's a question for another post. One thing should be clear, though. I'm not disputing its popularity. I'm criticizing its misguided approach and design. I'm blasting every computer scientist who thinks that he's a language guru eo ipso. Programming languages aren't natural. They're artificial. Experts in artificial languages shouldn't pretend to be SLA experts. That's what von Ahn and Hacker did. That's why their product is such a shit pile. The millions who take whiffs and bites of it do not change that.

Hey, Duolingo, iss meinen Arsch! ("Eat my ass!")
Or, maybe Duolingo is a language app for masochists. That would make more sense.

2018-10-19

Language Without Metalanguage 3:
Discoveries Over Categories

Cases, and Aspects, and Moods, Oh, My!

Four steps, two rules. That is the essence of message parsing. But, as my title indicates, there are complications. So, sorry fam, but this is going to be fairly dry. In fact, unless you plan to use message parsing to teach languages or engage them at a higher (C1-minimum) level, just skip this one. This post merely closes some gaps that language experts will whine are open.

I'm not going to explain cases, aspects, and moods here. I'll just show some examples and work through them. If a sentence in some language seems impossible to message-parse, leave a comment, and I'll make a post or video on it.

What's the issue with cases?

The perfect seaman gift!
Message parsing breaks sentences into atomic sentences of eight (or maybe nine) types, not counting quantifiers. However, to some who see this list, it seems incomplete. Some sentences look like they require more arguments, and they look atomic. Here's one:
  • "I gave the seaman a gift."
If "the seaman" can't be removed, we introduce more atomic sentences. But, there's a kind of minimal functional completeness that learners should retain. Arbitrarily adding atomic sentences extends that minimum. That then makes message parsing less practical.

Most important, though, is that we would not see that these sentences mean the same thing:
  • "I gave the seaman a gift." ⇨ ADD and MOVE ⇨ "I gave a gift to the seaman."
That MOVE may not be available, though. Take German. Its dative structures appear with special determiners. If we can't ADD and MOVE them out of the way, we can still CUT them out:
  • "Ich gab dem Seemann ein Geschenk." ⇨ CUT ⇨ "Ich gab ein Geschenk."
We might also spot added arguments with morphemes. Take Russian and its case suffixes. We can CUT their extra dative arguments out, too:
  • "Я подарил матросу подарок." ⇨ CUT ⇨ "Я подарил подарок."
The main takeaway? We can't SPLIT or CUT parts off of atomic sentences. The conclusion? None of these sentences with extra arguments are atomic.

What about aspects and moods?

Aspects are less of an issue. In most cases, they require no parsing at all.

Consider English speakers learning to use Chinese's "了". Shitty Chinese teachers say it's about tense, crappy Chinese teachers try to explain the perfective aspect, and decent Chinese teachers just show translated examples:
  • "我買車了。"
    → "I bought a car."
    → "I had bought a car."
  • "他來了。"
    → "He came."
    → "He has come."
    → "He is coming (now)."
Examples clearly beat parsing here. Also, trying to isolate an aspect into its own atomic sentence requires that very aspect to state it. That leads to an infinite regress. It's just one of those consequences of having to state linguistic metalanguage in an object language.

Moods, on the other hand, do need some explaining, since problems arise with irrealis moods. I'll stick to English here, since you'd have to know foreign irrealis structures to judge them:
  • "I would hate water if I were a sailor." ⇨ SPLIT ⇨ "I would hate water. I were a sailor."
You see the problem. Unfortunately, there's no ideal solution here. I patch message parses by marking them as non-sentences, and then transforming them into sentences with a substitution, like this:
  •  "I were a sailor." ⇨ "I could be a sailor."
A more technical way to handle it is to apply current syntax theory and to leave complement phrases as they are:
  • "I would hate water if I were a sailor." ⇨ SPLIT ⇨ "I would hate water. If I were a sailor."
Complement phrases aren't atomic, though, so you'll be back to a CUT step and a substitution, anyway.

Yeah, great, but I'm doing this for languages I don't know. Then what?

I'd recommend a language exchange partner or a bilingual friend. With them, you can verify that your parsed sentences are okay sentences. If you're learning by yourself, it's going to be much harder. It's not impossible, though, with a few pointers.

JFGI With Quotes and Asterisks

For those who don't know, JFGI means "just fucking Google it". What most Googlers don't know is that Google supports wildcard searches: inside a quoted phrase, an asterisk stands in for a whole word. So, for example, if I Google "just * Google it", I'll get these results. This matters for language learners because it can show which phrases are real sentences.
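If you'd rather not type those queries by hand, here's a trivial sketch in Python, using only the standard library. The helper name and the probe phrases are mine, invented for illustration:

```python
# Build a Google search URL for a quoted phrase. Inside the quotes,
# each asterisk stands in for one unknown word.
from urllib.parse import quote_plus

def probe_url(pattern):
    return "https://www.google.com/search?q=" + quote_plus(f'"{pattern}"')

print(probe_url("just * Google it"))
print(probe_url("gab * ein Geschenk"))  # probing a German chunk from above
```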

Keep in mind, though, that the number of results is not very important. What's really important is the presence of results from native sources. That means that entries from Reverso, Linguee, TripAdvisor, and the like are not good indicators. Such sites archive or automatically generate translations. When the first page of results is full of such links, I jump to the sixth page. That's where I find actual uses, if they exist. If I'm message-parsing French, I'm looking for URL's like "sacrébleu.fr", a native French site for a French audience.

Also, keep in mind that rarer and longer sentences will probably have fewer results. I check pieces that are around four words in length. Too long, and there won't be any results. Too short, and the crap sites will clog the pages. Using a search engine wisely takes some practice.

Checking a Conjugator

If Googling doesn't work, I might check a conjugation guide. The Reverso Conjugator is useful for Romance languages. Learners should use it very sparingly, however. Wading through metalanguage is exactly what I want people to avoid. For most of you, if you find yourself analyzing a sentence this heavily, it's already a bad sign. Either the sentence is too difficult for you, or the sentence has a problem with it. Either way, I recommend the lazy way out.

When In Doubt, CUT It Out

CUT is the last step in message parsing for a reason: It's virtually fail-safe. You will lose parts of the narration. However, as a learner, your goal is to understand sentences. If stuff is in the way, just do as Uncle Joey says:

Good advice, Dave!
It's way simpler to Google phrases with words cut out of them. Also, you save yourself from wading into metalanguage, and that's the entire point. Learners' goals are not to know how languages work. They're to know what counts as language. Message parsing only comes in when something about a sentence's structure confuses a learner. Maybe the order doesn't make sense to them. Maybe they expect one structure, but encounter another. Whatever the case, it's not about analysis or categorization. It's about discovery. Message parsing is one route to that discovery. In my opinion, it's the route of least mental resistance. The amount of episteme (knowing-that) that most formal grammars impose gets in the way of the more important techne (knowing-how) of a language. So, it's best to make the parsing process painless, and, eventually, not to need it.

2018-09-12

Language Without Metalanguage 2:
Breadth Over Depth

Approaching Languages More Logically

Before I got into languages professionally, I trained to become a professional logician. This is why I titled this blog series as I did. The distinction between object languages (OL) and metalanguages (ML) exists in both fields. However, when it comes to using maths and logics (i.e., formal languages) or natural languages well, it's irrelevant, despite what many language instructors say.

This false belief and bad practice arise for many reasons. I've discussed economic and social reasons previously. There's yet another cognitive reason: Language instructors confuse linguistic ML's value for logical ML's value. You see, logical ML is all about questions of depth. It checks if a logic is consistent, complete, compact, decidable, sound, and the like. These are important things to have in a logic. They show how far and how reliably you can use a logic's OL. However, once someone has proved an OL's worth through an ML, the rest of us can just use the OL. It's the division of labor. The nerds do the deep stuff so that we can do the broad stuff.

Linguistic ML, however, focuses on deriving all and only the well-formed formulas (WFF's) of natural languages. Logics have WFF rules, and by rules, I mean top-down definitions. These definitions say what counts as a "sentence" in a logic's OL. In fact, in logic, WFF's must be defined, or else there is no OL. This clearly isn't the case for natural languages, though. French doesn't cease to exist if we produce no WFF rules for it. Natural languages are bottom-up. Their WFF rules are in the brains of native speakers. A linguistic ML just tries to say explicitly what they are.

Now, linguistic ML might be valuable to language learners if we definitively knew the rules (we don't) and if we could program humans like machines (we can't). We can't reverse-engineer this part of our human endowment, for various reasons. For one, it's a plain empirical fact that no one is taught their first languages. Data further suggests that we can't be taught second languages. We can learn them, but not with ML-directed instruction. Why? Because, like in logic, language learners must have an OL before any ML can make sense to them:
"This hypothesis [that we learn languages by imitation] would not account for the many instances when adults do not coach their children in language skills. Positive reinforcement doesn't seem to speed up the language acquisition process. Children do not respond to or produce metalanguage until 3 or 4, after the main portion of the grammar has been mastered."
The facts, face them!
This explains the deep stupidity among most language instructors. They pretend that a linguistic ML is as definitive as logical WFF rules. Then, they force learners to retain a linguistic ML before they have a functioning object language. Why? Because that's how teachers teach logic or math. They teach the WFF rules, show some examples, and drill students with proofs or computations. Most of us can add or do deductions thanks to such instruction. With natural language instruction, though, this approach falls flat on its ass. Natural languages aren't "harder". Your language instructors are just fucking deluded. It's why 99% of people who are taught arithmetic can do arithmetic fluidly. It's why 99% of babies easily acquire their first languages (i.e., they just ignore linguistic ML). And finally, it's why about 5% of people who are taught foreign languages actually become fluent in them.

That should convince anyone to abandon any ML-directed approach. However, it leaves us in a bind. We need to understand an OL, and we can't gain it by being taught an ML. It also seems that acquisition via mere exposure (i.e., immersive learning) wanes as we age. So, if you're prepubescent, you may have a decent chance.

But, what about the rest of us?

To answer that question, we have to know a little-known trick. Formal languages borrow universal features from natural languages. In logic, though, this borrowing remains incomplete. No logic on Earth can create WFF's for every sentence of a natural language. Logical WFF's only formalize some subset of natural OL sentences. As logicians have progressed, they've created OL's that "increase logics' expressiveness." Gottlob Frege invented a more expressive first-order predicate logic after millennia of no solutions to the problem of multiple generality. C. I. Lewis did something similar with the first modal logic. Even so, logical WFF's are still far from matching what we can deduce in natural languages. For instance, there is still no generally accepted logic which can handle second-order inferences well:
  • "John runs fast," implies, "John runs."
  • But, "John does not run fast," does not imply, "John does not run." He may just run slowly.
While I and others have invented logical OL's that handle the second-order issue above, they wouldn't help a language learner. They still wouldn't cover every natural language. Also, it wouldn't matter if they did. Replacing a linguistic ML with a logical ML would be more correct and universal, but it would require way more ML-directed instruction. If you think verb conjugation and agreement rule drills suck, imagine writing strings to capture the features of any natural language in a pristine formalism! Or, just imagine shooting yourself in the face. It's about the same.


Not all hope is lost, though, because one bit of logical ML informs an OL-driven approach. Among logical WFF's, some are atomic formulas, or "atomic sentences". There are only two things for a language learner to know about them:
  1. An OL's atomic sentences are its smallest possible sentences.
  2. All other sentences in a language are connected, transformed, or expanded atomic sentences.
Now, in my estimation, there are eight (or, with quantification, 28) atomic sentences for all natural languages. Luckily, there's no need for learners to memorize them. We only need to learn steps to pull them out of other sentences. This is the essence of message parsing.

Finally, you get to the good stuff! How do I do those "message parses"?

I can't fit everything about them here, but their essentials are very simple. There are just four steps, and the third step contains an optional sub-step:
  1. ADD (ellipses).
  2. MOVE (transformations into a canonical word order, if one exists).
  3. SPLIT (sentences with connectives).
    1. SWAP (pro-forms with their referents, referents with pro-forms, or connectives with second-order modifiers).
  4. CUT (modifiers).
And, with these steps, there are two rules: 
  1. Do the step below only if you can't do any steps above.
  2. Do the steps above only if they help you do a step below.
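For the programmers reading, Rule 1 is just ordered control flow. Below is a minimal sketch of it; can_apply, apply_step, and is_atomic are hypothetical stand-ins for judgment calls that, today, only a human can make. Rule 2 resists coding for exactly that reason, so it survives only as a comment.

```python
# The four steps, in order. SWAP is an optional sub-step of SPLIT.
STEPS = ["ADD", "MOVE", "SPLIT", "CUT"]

def message_parse(sentence, can_apply, apply_step, is_atomic):
    """Reduce a sentence toward atomic sentences.
    Rule 1: always do the earliest applicable step.
    Rule 2 (do an earlier step only if it helps a later one) is a human
    judgment, so here it hides inside the can_apply helper."""
    queue, done = [sentence], []
    while queue:
        part = queue.pop()
        if is_atomic(part):
            done.append(part)
            continue
        for step in STEPS:  # Rule 1: try the steps top to bottom
            if can_apply(step, part):
                queue.extend(apply_step(step, part))  # SPLIT may return two pieces
                break
        else:
            done.append(part)  # nothing applies; leave it as-is
    return done
```

The skeleton is trivial; the meat is linguistic.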
Through these steps, an unparsed sentence's meaning becomes clearer. To demonstrate, I took a French sentence out of my PollyGot database. I don't know French. However, with some work, I can figure out its message parse in a few minutes:
Note: The atomic sentences are the green ones.
I can then translate every parsed sentence, working bottom-up to translate the unparsed sentence:

Quelque chose généralise la douleur. → Something generalizes the pain.
De la fatigue caractérise l’infection. → Fatigue characterizes the infection.
Une douleur caractérise l’infection. → A pain characterizes the infection.
La douleur est généralisée par quelque chose. → The pain is generalized by something.
Une douleur et de la fatigue caractérisent l’infection. → Pain and fatigue characterize the infection.
De la fièvre caractérise l’infection. → Fever characterizes the infection.
La douleur est généralisée. → The pain is generalized.
De la fièvre, une douleur et de la fatigue caractérisent l’infection. → Fever, pain and fatigue characterize the infection.
De la fièvre, une douleur qui est généralisée et de la fatigue caractérisent l’infection. → Fever, pain that is generalized, and fatigue characterize the infection.
L’infection est caractérisée par de la fièvre, une douleur qui est généralisée et de la fatigue. → The infection is characterized by fever, pain that is generalized and fatigue.
La grippe est une infection. → Influenza is an infection.
La grippe est une infection qui est caractérisée par de la fièvre, une douleur qui est généralisée et de la fatigue. → Influenza is an infection that is characterized by fever, pain that is generalized and fatigue.
La grippe est une infection caractérisée par de la fièvre, une douleur généralisée et de la fatigue. → Influenza is an infection characterized by fever, generalized pain and fatigue.

Normally, though, I don't parse sentences completely. I only parse them until I can understand them completely. That rarely requires going to the atomic level.

Wait, how can you know the OL's syntax well enough to do such a parse?

That will also be the topic of a later post. For now, all that matters is this: I didn't consult a grammar book. I looked for words that would help me do the steps. I used my knowledge of my natural languages, a bit of logical sense, an online dictionary, some machine translations, and searches for substrings. They guided the parse and helped me check my parsed sentences' grammar.

Best of all, it gradually became automatic. I grew a sense of when and where to apply these rules and checks. It got me through my third language. It's getting me through my fourth, fifth, and sixth languages. I'm not doing it obsessively, either. I just use this method when a sentence confuses me. That, too, becomes less and less frequent. That's why I impart this moral: When it comes to language learning, a breadth of experience is worth much more than a depth of analysis. So, if you are starting with your second language, you can still message-parse sentences from your first language. That practice can show you where and how to add, move, swap, and cut sentences in other languages.

On the other hand, you could be this joke. That's always fun.

2018-08-20

Language Without Metalanguage 1:
Messages Over Deep Structure

To Learn Languages Faster, Abandon Theory

Imagine I were teaching you arithmetic the way most language centers teach languages. This is how it would read:
Here's how you get 2 + 2 = 4: Define every natural number by a successor function S(n) on the empty set {}. Every successive natural number is the union (∪) of all of its preceding natural numbers. Since 0, the first natural number, has no preceding natural numbers, it is equivalent to the empty set {}. We denote the values of the successor set up to 4 with the following values:
  • 0 = {}
  • 1 = {{}} = {0}
  • 2 = {0, 1}
  • 3 = {0, 1, 2}
  • 4 = {0, 1, 2, 3}
Under this model, addition (a + b) is a binary operation defined by the following scheme of the successor function on the successor set of natural numbers:
  • a + 0 = a
  • a + S(b) = S(a + b)
Therefore, 2 + 2 = 2 + S(1) = S(2 + 1) = S(3) = 4.
Now, here's the question: Did this explanation make you better at arithmetic? If I give you a harder calculation, will you faithfully follow these steps and get to the correct result? I bet most of you won't. If you try, it will probably take much more time. This is the problem of metalanguage. The explanation is correct, but it's impractical because the surrounding facts are tangential and more complicated than just saying this:
1 + 1 + 1 + 1 = 4, and 1 + 1 = 2, so 2 + 2 = 4.
That explanation would be incomplete if you were a professional number theorist. However, most people who learn arithmetic don't have that goal in mind. Similarly, people who focus on grammatical structure are not training themselves to be good at a language. Instead, they're training themselves to be good at analyzing it. But, this is not to say that language theory does not have its place. Linguistic metalanguage is useful for building language curricula or analyzing languages' features. The problem, though, is that too many theorists waste learners' time. They're teaching language theory instead of teaching language.
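And if you want proof that the metalanguage above is machine food, here it is as runnable Python. The sketch is mine, purely for illustration; no sane curriculum would start here:

```python
def S(n):
    """Successor: S(n) is the natural number after n."""
    return n + 1

def add(a, b):
    """Addition by the scheme above: a + 0 = a; a + S(b) = S(a + b)."""
    return a if b == 0 else S(add(a, b - 1))

print(add(2, 2))  # 4, via 2 + 2 = 2 + S(1) = S(2 + 1) = S(3) = 4
```

A computer thrives on that definition. A child learning arithmetic doesn't, and an adult learning French doesn't, either.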

Fine, but how can we learn a language's grammar without being taught it?

Well, there's a problem with the question. The phrase "learn a language's grammar" is ambiguous. There are two modes of learning grammar.

The first is what I've already criticized. It's to learn rules, and then to build sentences from those rules. It's a top-down approach. It's structural and analytical. And, it taxes learners. It forces people to (1) memorize patterns with categorical slots, (2) memorize grammatical categories, (3) remember a list of terms for each category, and then (4) plug those terms into their categorically correct places. A basic syntax tree can reveal this much:

"If you want to master several (types of) languages, then you need not study language theory."
To understand this sentence's grammar in the above way, you'd have to understand at least these categories and concepts: complement phrases (CP), complementizers (C), sentences (S), determiner phrases (DP), inflectional phrases (IP), determiners (D), nulls (∅), noun phrases (NP), adverbial phrases (RP), inflections (I), pro-forms (e.g., pronouns [N-pro]), verb phrases (VP), valency (e.g., transitive verbs [V-t]), part-of-speech transformations (→), adverbs (R), negations (R-neg), and auxiliaries (also I).

Complicated? Check. Theoretical? Check. Universal? Not quite. Exhausting? Most def.

And, I haven't even begun to explain how debatable the tree is. I didn't explain how "的話" is also a relativizer and a noun phrase that combines with a preceding sentence to generate a noun phrase that indicates the topic of the overall sentence. Even if I did, how important is that to you as a learner? How many articles on this subject are you going to read to find out the truth? The answer: zero.

Oh, and one final question: Who's going to teach this heavy theory to you? You may not know this, but most language instructors don't know their shit. I've taught languages, designed curricula, consulted on projects, and trained developers for over a decade. I've met only a handful of people who knew the relevant theory. Most normal people don't know anything about it. Many of them move to foreign countries and work as foreign-language instructors. Even if they're certified, they almost never learn the linguistics. The result is worse than receiving grammar instruction. You receive incorrect grammar instruction. Their limited and wrong folk grammar becomes your limited and wrong folk grammar. The blind lead the blind to blindness.
Ah, folk grammar! A nice drink of warm bullshit!
So, you don't need the theory and can't learn it easily. But, not all hope is lost. We can learn a language's grammar. We just do so by attending more to languages' messages and less to their structure. This lets us focus on languages' logical form (as understood in both logic and linguistics) as opposed to their theoretical structure. The techniques behind this approach will be discussed in future posts. Right now, though, an example should show the value of logical form over syntactic analysis when we study languages. Look at that same Chinese sentence when it's "message-parsed" instead:

Theory belongs in the background.
With this tree, we only see a language's surface structure. The maximally literal translations then help us understand what the sentences are saying. That alone can give us all of the semantics and syntax that we need. It just needs to be read attentively and from bottom to top. To prove this, I'll offer no instruction. Instead, I challenge you to translate these sentences:
接受挑戰 ("Accept the challenge")

  • "You want to study theory."
  • "You master several types of theories."
  • "You need not master several types of languages."
  • "If you study theory, then you want to study several languages."
  • "你想學會幾種理論。"
  • "你不用想這麼做。"
  • "你不用學語言。"
  • "如果你想這麼做的話,你就想學這種語言。"
This is what competent "grammar instruction" looks like. The theory is left to the experts (in this case, me). Normal people get a product that eliminates the need to learn complex theory. That's because learners learn from the bottom up, not the top down. Competent materials should have always reflected this.

Okay, fine, but what can I do about that?

You mean, besides not paying money for crap products? You mean, besides using a free app (wink, wink) that was developed with this approach in mind? Well, I guess you could wait for my next posts to learn how to make these message parse trees for yourself, even if you don't fully understand the language.

More immediately, however, you can change how you think about language instruction. You're not learning math or history here. Languages don't reduce to top-down rules. They don't reduce to facts that you memorize and repeat. Language proficiency is a skill. Acquiring syntax is a product of that proficiency. It's not a goal to be sought. You don't learn a new language to learn how to conjugate its verbs correctly, or whatever. You conjugate verbs and say stuff correctly only after you've acquired the syntax, only after you've pursued the language and not the theory. Techne isn't episteme, and it never will be. They each have their roles to play, but unless you're a language nerd for fun or for pay, pursue the former. Nugget, sent.

2018-07-09

Gender; Or, Why US Pronoun Patrolling Will Fail

When They Insist On Being Called "Them"...

I have a rule about facts: They're boring. That is, that which is really true doesn't need advocates. Thus, as a corollary, anything that's politically charged is based on a lie. Sometimes, it's people with power that defend nonsense. Think of Medieval Catholic geocentrism or slavers' biological justifications for racism. They were bullshit. They always had been bullshit. However, the vox populi was on their side, and consensus is enough for bullshitters. This time, however, the bullshit is on the side of the unpopular view. And, when small groups of wrong people are very loud, many people seem to believe there's a "dialogue" worth having. There isn't, and there never was. That said, I aim to show that enforcers of transgender shifts in English grammar are no different from Westboro Baptist Church protesters. Their only differences are what delusions they worship and what facts they curse as a result.

If they're so wrong and doomed to fail, why should we even care?

To be brief, we shouldn't. If we ignore most nonsense, it will naturally go away. We shouldn't really care about gender beyond language. After all, gender is, first and foremost, a linguistic feature. This much is clarified by Michelle Cretella:

"Gender, as a term, prior to the 1950's, number one, did not refer to people; and, number two, was not in the medical literature. [...] And so, they [sexologists who invented sexual reassignment surgeries] basically looked at the word 'gender', which meant "male and female", referring to grammar— and you can go online. I went back to dictionaries in the 1700's, and you can actually see the definition[s] of gender all the way up. So, in the 1950's, one of the sexologists at the time was John Money, Dr. John Money, and they said, 'Well, we're going to take gender and say, for people, it means "the social expression of an internal sexed identity". That's what we're treating.' They pulled it out of the air."
And, as a feature, it's culturally arbitrary. A language can have a dozen genders or none. It has no direct binding to human sexuality, and there are only people of a few sexes (males, females, and some intersex folks). It's the same way with colors, but in the opposite direction. Most humans can distinguish around three million distinct colors. However, most languages only have a few dozen words for them. This alone has served as the key empirical disproof of linguistic determinism. That is, our words do not decide what things we can perceive or what really exists. Wittgenstein was wrong. Sapir and Whorf were wrong. We can move on.

Then, why are you still on about transgender people?

Trannies aren't the issue. It's this push for a transgender grammar that's annoying.

- "But, that means I'm not really a woman.
I'm just a guy with a mutilated penis."
- "Basically, yes."
Transgender grammar is just the opposite face of that same linguistic determinism. We can only identify sex in a few ways. Linguistic gender, when applied to people, refers to people with those sexual features. However, when they bloat a grammar with invented genders, new sexes don't magically come into being. Neither do they serve to "increase awareness" of other genders, because they made the bullshit up. Their only recourse in the English language is to appeal to "preferred pronouns", and that tells the whole story.

"Preference" Nonsense

First is the word "preferred". What is this preference? Well, all we can observe is a preference to be inconvenient to native English speakers. Their pronouns make no meaningful reference to people's sexual features. Nor do their pronouns have any clear features beyond individual preferences. So, their preference is just to say, "I want a unique pronoun, even though available pronouns already describe me." It's a plea for a false endowment. It's difference for the sake of difference.

"Pronoun" Nonsense

Second is the word "pronoun". This is telling for two reasons. First, it reveals a deep (even deliberate) ignorance of the relevant linguistics. Gender in language is all about agreement, and agreement is matching inflections between nouns and other connected parts of speech. Gender of this sort doesn't exist in English. Gender in English is only present in some nouns' morphemes and in pronouns. For example, English distinguishes "actors" from "actresses" and "waiters" from "waitresses". That's morphological. Also, English uses pronouns "he" and "she" to refer to male and female people. That's lexical.

English gender agreement only occurs in pronoun tracing. Take this sentence:
  • "John and Sue wanted to meet Mary, but she doesn't want to meet him."
This is only slightly ambiguous. "She" could refer to Sue or Mary. Most English speakers would assume "she" replaces "Mary". But, if these asshats got their way with "them", it would read like this:
  • "John and Sue wanted to meet Mary, but they doesn't want to meet them."
All I heard was,
"My so-called journey was a mistake."
Thanks a fucking lot, transgender community! Now, I have to force a noun-verb disagreement to suit your shitty preferences. Not only that, I also have to guess whether "them" refers to John and Sue, or just John, or just Sue. Yeah, and the absent anaphor means I have to make an extra cognitive effort to understand "they". Otherwise, I might infer that John and Sue don't want to meet themselves. Again, congratulations, trans people. You just asked us to break our language because you don't want to get clocked, or for some other stupid reason.

Worse yet, this inconvenience also extends to other languages. Spanish transgenders have been pushing to alter Spanish orthography to use suffixes "~x" or "~@" instead of "~o" and "~a". It's meant to avoid assigning "false genders" to people. The result? Less convenience, more confusion. In this case, it's so bad that Spanish speakers wouldn't be able to pronounce the sentences as read with such changes:
  • "El/la enfermerx me dijo que hay solo un@ médicx acá que es un(a) cirujanx pediátric@."
Exacto, que se cojan por el culo. ("Exactly: they can shove it up their ass.")

Or, consider Russian and German, which have a "neuter" gender. That doesn't mean that "neuter-gendered" people exist. It also doesn't mean that "neuter" gender is a human social role. Any claim otherwise is a category mistake.

More "Pronoun" Nonsense

Second, this inconvenience causes speakers to avoid any verbal interaction with those who make these dumb requests. It's supremely ironic. They seek more dialogue over gender. Then, they make requests (or assholish demands) that cause further isolation. It's another symptom of linguistic ignorance. The core syntax of a language changes for only a few reasons. I've previously explained some of them. However, these pronoun patrolmen can only hope to appeal to its convenience or group sensitivity. They're clearly failing on that first front. What about our wishes not to offend people?

Sorry, but history is not on their side, either. Plenty of special interests have tried to reform the English language. None of them stuck for more than a few years, and only among a small group of people. The proof is in the corpus data:
And, for you, the transgender community:
Trans people, most of us don't care that you want to masquerade as the opposite sex, or if you think you "defy traditional genders" somehow. Abnormal sexuality and genital mutilation don't make you special. We have no obligation to comfort your delusions. That's why you will never win this imaginary rights battle. Expecting normal speakers to meet your silly requests is not doing your kind any favors. If you become outraged, it's your own fault. You shouldn't have fallen for Dr. Money's profit scheme.

What should we do, then, if we don't want to offend them?

The only correct pronoun here is "whatever-the-fuck".
I'm really not the person to ask. Just use their proper names all the time, I guess. But, don't waste your life dismantling a language for the sake of people's feelings. Soon enough, activists will have abandoned this futile effort. They'll move on to getting animals elected mayor or some other stupid shit. They might actually succeed in that effort. I'd sooner vote livestock into office than dice my syntax to hell.