Stacking vs. Replacing the LMS [...]

From Eliot Masie:

  1. Stacking vs. Replacing the LMS/LCMS! While many organizations have grown beyond the current capacities of their Learning Management Systems and LCMS, there are significantly fewer corporations choosing to make the major capital and implementation investment of replacing their entire enterprise learning technology. Rather, we are seeing more “Stacking”, which means accepting the role of the existing LMS as the base system for the organization and then adding Stacks or Layers on top that will create added and more targeted functionality. Some of the Learning Stacks include:
  • Competency or Talent Management Layers.
  • Assessment or Feedback Layers.
  • Compliance or Regulatory Layers.
  • Career Development Layers.
  • Collaboration and Social Networking Layers.
  • Gamification or Engagement Layers.
  • Globalization Layers.

In other words, some organizations are shifting from replacing their LMS to adding these technologies on top of the LMS. Some layers may be limited to a specific line of business or segment of learners. Others are layered in almost as extensions of the LMS, using the core code for transaction tracking and shared data exchange – but the functionality is found in the layer. One might call it an “LMS Inside” approach.

Bad Roads Barrier to Self-Driving Cars [...]

An estimated 65 percent of U.S. roads are in poor condition, according to the U.S. Department of Transportation, with the transportation infrastructure system rated 12th in the World Economic Forum’s 2014-2015 global competitiveness report.

Tesla, Volvo, Mercedes, Audi (VOWG_p.DE) and others are fielding vehicles that can drive on highways, change lanes and park without human help. But they are easily flummoxed by faded lane markers, damaged or noncompliant signs or lights, and the many quirks of a roadway infrastructure managed by thousands of state and local bureaucracies.

In other developed countries, greater standardization of road signs and markings makes it easier for robot cars to navigate. In the U.S., however, traffic lights can be aligned vertically, horizontally or “dog-house” style in two columns. Pavement markings use paint with different degrees of reflectivity – or don’t exist at all.

“If the lane fades, all hell breaks loose,” said Christoph Mertz, a research scientist at Carnegie Mellon University. “But cars have to handle these weird circumstances and have three different ways of doing things in case one fails.” (Source)

See also Google Car Is a Glorified Train

Rob Cross on Networks and Job Performance [...]

Rob Cross on Networks and Job Performance:

“Traditionally, self-help books on networks focus on going out and building big mammoth rolodexes… What we’ve found is that this isn’t what high-performers do. What seems to distinguish the top 20% of performers across a wide range of organizations is not so much a big network. In fact, there is usually a statistically significant negative relationship between being a top performer and knowing a lot of people.”

Spiral of Facebook Silence [...]

Facebook interactions, like face-to-face interactions, are highly influenced by expectations of agreement. What gets shared is what people think others will agree with.

Previous ‘spiral of silence’ findings as to people’s willingness to speak up in various settings also apply to social media users. Those who use Facebook were more willing to share their views if they thought their followers agreed with them. If a person felt that people in their Facebook network agreed with their opinion about the Snowden-NSA issue, they were about twice as likely to join a discussion on Facebook about this issue.

Avoiding Why [...]

By focusing on the how, we can keep conversational participants more fluid and nimble, and less self-defensive.

Golden Rule 2. Avoid using the question “Why”. This rule may seem counterintuitive because most people’s default question to find out more information about something is “why”. What most people don’t realise is that most of our thoughts throughout the day are constructed from half-truths and poorly constructed ideas that we are influenced by because of the media. In hypnosis, “why” is known as a reality-constructing question, which means that when you ask it of someone, they have to look inside themselves, search their feelings and experiences in life, and form a thought based on the question; the more emotions get attached to the thought, the stronger they feel about it.

When you ask people “why” they will start looking for reasons to justify their experience and why it may be true, and as a result the issue becomes much larger for the person and they get sucked into the void the emotions create. This can make our job much harder when we are trying to solve the issue for the other person. If you were trapped in the wilderness, would you rather fight this gigantic hungry grizzly bear that’s practically salivating at the chance to eat you, or would you rather fight the baby grizzly bear?

When people ask “why”, what they generally mean is “how”, so ask “how” instead.

Bad Example “Jimmy, why do you have odd socks on?” This sends the mind looking for reasons why they have odd socks on and it also locks them into a position, which might be difficult to change later.

Good Example “Jimmy, how did you manage to put odd socks on?” This focuses people on the process of finding odd socks and gives us more room to break the odd sock problem down.
Note. Use the “Why” question when you are moving someone in a positive direction.

Good Example “Jimmy, why are you so happy today?” This sends the person looking for justifications and reasons why they are happy – why not make people smile? (Source)

Thinking this through, it’s hard to translate this to some situations, but I’m leaving it here for later — I do think the “reality-constructing” question is a big insight.

Pearson’s Forced Engagement [...]

Pearson (Luckin et al., 2016) has announced that they are developing programs that will monitor students as they participate in group work, showing how well each student is participating (p. 27), using, for example, “voice recognition (to identify who is doing and saying what in a team activity)” (p. 34). This is designed to make sure students are participating according to the programmers’ ideas of what optimal participation is.

This and other intrusions are designed to make sure students are focused on just the task in front of them right now, and are participating in exactly the way Pearson wants them to participate. This strengthens an error nearly all schooling makes and makes true creative thinking and learning impossible.

Studies in creativity have revealed that “incubation” is a crucial aspect in the development of new ideas and understandings. After a period of intellectual struggling, of “wrestling” with a problem, progress, deeper understanding, often comes after a short period of intellectual rest, “an interval free from conscious thought” to allow the free working of the subconscious mind (Wallas, 1926, p. 95).  (Source)

Creativity and productivity are more often determined by relationships outside the group. See Why Learning Networks Must Be Open Networks

K-State Open Textbook Initiative [...]

Through the initiative, faculty can receive a stipend of up to $5,000 to develop or adopt an alternative to a traditional textbook.

The funds can be given to either an individual faculty member or a team of faculty teaching multiple sections of the same course. A proposed project may include: 

  • The use or adaptation of an existing open access textbook
  • Combining library resources and/or high-quality open educational resources (OER), and/or faculty-authored materials. The project may also involve any media (articles, audio, video, web sites, etc.)

If you have questions about what material to use, or would like to know if the material you would like to use would qualify, please do not hesitate to contact us.

Two of the co-developers of the initiative, Drs. Andy Bennett and Brian Lindshield, have successfully developed and implemented open textbooks. Dr. Bennett’s Elementary Differential Equations textbook features algorithms that create problem sets for students to practice. Dr. Lindshield’s Kansas State University Human Nutrition (HN 400) Flexbook was a finalist for the 2012 People’s Choice Award for the Most Open Resource. (Source)
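The idea behind Dr. Bennett’s algorithmically generated problem sets can be sketched in a few lines. The actual generator is not public, so everything specific here (the problem family, the parameter ranges, the function name) is invented for illustration: draw random parameters, emit the problem, and keep the closed-form answer alongside it.

```python
import random

def make_ode_problem(rng):
    """Generate one practice problem of the form y' = a*y, y(0) = y0,
    together with its closed-form answer y(t) = y0 * exp(a*t).
    Purely illustrative; not the textbook's actual algorithm."""
    a = rng.choice([-3, -2, -1, 1, 2, 3])   # growth/decay rate
    y0 = rng.randint(1, 9)                  # initial condition
    prompt = f"Solve y' = {a}y with y(0) = {y0}."
    answer = f"y(t) = {y0}*exp({a}t)"
    return prompt, answer

# Seeding the generator makes a problem set reproducible for grading.
rng = random.Random(42)
for _ in range(3):
    prompt, answer = make_ode_problem(rng)
    print(prompt, "->", answer)
```

Because each student’s parameters differ but the answer is computed from the same formula, the same code serves both problem generation and automatic checking.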

Why Learning Networks Must Be Open Networks [...]

The bottom line? According to multiple, peer-reviewed studies, simply being in an open network instead of a closed one is the best predictor of career success.
In the chart, the further to the right you go toward a closed network, the more you repeatedly hear the same ideas, which reaffirm what you already believe. The further left you go toward an open network, the more you’re exposed to new ideas. People to the left are significantly more successful than those to the right.
In fact, the study shows that half of the predicted difference in career success (i.e., promotion, compensation, industry recognition) is due to this one variable. (Source)

Open Networks promote Serendipitous Exchange

Complex Contagions [...]

Homophily increases local spread at expense of global spread. Viral phenomena must spread locally and globally.

How does network structure affect diffusion? Recent studies suggest that the answer depends on the type of contagion. Complex contagions, unlike infectious diseases (simple contagions), are affected by social reinforcement and homophily. Hence, the spread within highly clustered communities is enhanced, while diffusion across communities is hampered. A common hypothesis is that memes and behaviors are complex contagions. We show that, while most memes indeed spread like complex contagions, a few viral memes spread across many communities, like diseases. We demonstrate that the future popularity of a meme can be predicted by quantifying its early spreading pattern in terms of community concentration. The more communities a meme permeates, the more viral it is. We present a practical method to translate data about community structure into predictive knowledge about what information will spread widely. This connection contributes to our understanding in computational social science, social media analytics, and marketing applications. (Source)
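The paper’s predictor, community concentration of early adopters, can be illustrated with a toy measure: the Shannon entropy of adopters over communities. Low entropy means the meme is confined to a few clusters (complex-contagion-like); high entropy means it is permeating many communities (disease-like, and by the paper’s argument, more likely to go viral). The study’s actual measures are more refined; this sketch and its example data are mine.

```python
import math
from collections import Counter

def community_entropy(adopter_communities):
    """Shannon entropy (bits) of a meme's early adopters over communities.
    Low = concentrated in few communities; high = dispersed across many."""
    counts = Counter(adopter_communities)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Early adopters of two hypothetical memes, labeled by community id.
concentrated = ["A"] * 18 + ["B"] * 2          # almost all in one community
dispersed = ["A", "B", "C", "D", "E"] * 4      # spread evenly across five

print(community_entropy(concentrated))  # low: about 0.47 bits
print(community_entropy(dispersed))     # high: log2(5), about 2.32 bits
```

Ranking memes by a measure like this over their first few hundred adopters is the flavor of "early spreading pattern" the abstract describes using to predict future popularity.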

See also Sanders Filter Bubble, Streams Don’t Merge

Serendipitous Exchange [...]

Innovations never happen without good ideas. But what prompts people to come up with their best ideas? It’s hard to beat old-fashioned, face-to-face networking. Even Steve Jobs, renowned for his digital evangelism, recognized the importance of social interaction in achieving innovation. In his role as CEO of Pixar Animation Studios (a role he held in addition to being a cofounder and CEO of Apple Inc.), Jobs instructed the architect of Pixar’s new headquarters to design physical space that encouraged staff to get out of their offices and mingle, particularly with those with whom they normally wouldn’t interact. Jobs believed that serendipitous exchanges fueled innovation.1

A multitude of empirical studies confirm what Jobs intuitively knew.2 The more diverse a person’s social network, the more likely that person is to be innovative. A diverse network provides exposure to people from different fields who behave and think differently. Good ideas emerge when the new information received is combined with what a person already knows. But in today’s digitally connected world, many relationships are formed and maintained online through public social media platforms such as Twitter, Facebook and LinkedIn. Increasingly, employees are using such platforms for work-related purposes.3 (Source)

Related: Why Learning Networks Must Be Open Networks

WWU Open Education Initiative [...]

From conversation with J. F. at WWU: OER initiative is piggybacking on top of existing course development and design initiative, $4,000 stipend to participate. Second track to course redesign workshop. Instructional Design in morning session and OER in afternoon. Anything developed is released as OER. Competitive: 70% acceptance rate.

Proposed through Student Technology Fee process. Currently in Phase II. Asking for $20,000 for four faculty.

Virtual Isolation [...]

Everyone talks about VR’s sensory overload, but the most troubling part for me was the sensory deprivation. It’s a blindfold. You need to clear an area to move around, yet the Rift doesn’t do a very good job of telling you when you’re nearing the edges. Unless we start building adult playpens, teeth will be lost on the sides of coffee tables. Oculus warns users during setup to “allow adequate space all around and above you” and that “loss of balance may occur.”

When I’m on the inside, I also can’t shake a feeling of paranoia. There’s no way to tell what people around you are saying and doing. The Rift needs a button you press to automatically reveal your immediate surroundings. (Oculus designers and engineers are already thinking about this.)

Another big-picture VR problem: It’s boring to be around people who are using it. Remember when you got a Nintendo Wii and invited people over for doubles tennis? Nathan came over to my house to play with the Rift, and we ignored each other for hours while he pawed at the air in silence. You can play Rift games with friends over the Internet but, despite being developed by Facebook, it offers few other ways to connect with people. (Source)

Binary Utopian Critique [...]

Mike Caulfield discusses the “banal uselessness of the binary utopian critique”:

I was watching Jesse Stommel at NWeLearn this past week give an excellent presentation on grading. In it he suggested a number of alternatives to traditional grading, and outlined some of the ways that traditional grading is baked into the system.

At the end of the talk, the inevitable hand: “Your presentation seems so BINARY,” says the questioner, “Why is it so either/or? Why can’t it be both/and?”

Sigh. (Source)

The problem with this?

Saying “Why is this so binary?” when presented with an alternate, minority vision is simply a way of supporting the status quo, by not engaging with the reality that the dominant paradigm is NOT “both/and” but rather “almost entirely this”. The world of the person making the “utopian binary” critique is one where they get to ignore the existing disparities the binary calls to light — a trick most recently seen in the ridiculous #alllivesmatter hash tag: “But why single out black lives?”

The “utopian” critique is very similar —

Them: “If this cannot solve all problems, then how can we be excited about it?”

Me: “But I didn’t say it solved all problems!”

Them: “Aha! So you admit it doesn’t solve anything!”

Me: “Um, which one of us is utopian again?”

Looked at another way, theory is a lens, and lenses by definition distort. That’s the function of a lens. See Nuance Traps

Nuance Traps [...]

In the article “Fuck Nuance” Kieran Healy argues that the process of theorizing is based on abstraction, and as such must attempt to avoid nuance where possible so that its claims and value (or lack of value) are more evident and debatable. When people present such abstracted claims, they are often accosted by people who say “Well, couldn’t it be both/and?” and “Isn’t it really more complex than that?”

As opposed to “true nuance”, Healy calls these anti-theoretical impulses “Actually-Existing Nuance”:

Actually-Existing Nuance is … like a free-floating demand that something be added. When faced with a problem that is hard to solve, or a line of thinking that requires us to commit to some defeasible claim, or a logical dilemma we must bite the bullet on, the nuance-promoting theorist says, “But isn’t it more complicated than that?”; or “Isn’t it really both/and?”; or “Aren’t these phenomena mutually constitutive?”; or “Aren’t you leaving out [something]?”; or “How does the theory deal with Agency, or Structure, or Culture, or Temporality, or Power, or [some other abstract noun]?”. This sort of nuance is, I contend, fundamentally anti-theoretical. It blocks the process of abstraction that theory depends on, and inhibits the creative process that makes theorizing a useful activity. (Source)

This results in a series of Nuance Traps:

  • The trap of the “fine-grain”. This misidentifies theory as a contest of description. But at some level of Actually-Existing Nuance you are not involved in theory at all anymore — you are merely describing phenomena.
  • The over-extension of theoretical frameworks. In this case theories eat and merge with counter-theories and “prerequisites” until they encompass so much that they are neither refutable nor useful.
  • The nuance of the connoisseur. This is the idea that only someone engaged with Actually-Existing Nuance can appreciate the “richness and complexity of the world”. This, again, is a near-complete rejection of the theoretical enterprise.

Mike Caulfield has noted something similar. See Binary Utopian Critique

The Rise of Partisan Hate [...]

The major change in the past 10 years has been the affective disposition towards the opposing party. People now hate the opposing party.


Here’s the NY Times:

During the Carter administration, cross-party ratings were not much below 50 — just slightly cool. But Democrats’ ratings of Republicans go steadily down from there, falling to 33 degrees under George W. Bush and a frigid 17 under Barack Obama. The Republicans show a slightly different trajectory, holding steady during the twelve-year Reagan-Bush period. But beginning with the Clinton years, Republicans sour on Democrats and join them in intensifying mutual dislike (18 degrees) during the Obama administration. To put those low scores in perspective, they are even lower than Democrats’ ratings of Richard Nixon in the years after Watergate.
So far the story has been quite similar for both parties. But the third graph shows us why it’s the Republicans who now seem to be more radicalized, energized and opposed to compromise.

Basically, the Republicans and Democrats, on average, trust the government about the same. The difference is that the Republicans don’t trust the government when Democrats are in power, at least since Carter (and, conversely, are very favorably inclined towards the government when it is helmed by a Republican president).

Five Channels of Political Tendency [...]

Jonathan Haidt talks about differences between parties:

The results from thousands of respondents in several countries provide clear evidence for a divergence in the importance placed on these five foundations of morality. Liberals rank Harm/Care highest, then Fairness/Reciprocity, then a big drop to Authority/Respect and In-group/Loyalty, with Purity/Sanctity last. Conservatives rate Harm/Care lower than liberals do, but place it at the top of their lists as well; Authority is a close second, followed closely by In-group/Loyalty and Purity/Sanctity, with Fairness/Reciprocity at the bottom.

Graph results for 23,684 participants within the USA; more studies at www.yourmorals.c (Source)

Bunker’s Backfire [...]

Archie Bunker and the original “bad fan” problem.

CBS arranged for extra operators to take complaints from offended viewers, but few came in—and by Season 2 “All in the Family” was TV’s biggest hit. It held the No. 1 spot for five years. At the show’s peak, sixty per cent of the viewing public were watching the series, more than fifty million viewers nationwide, every Saturday night. Lear became the original pugnacious showrunner, long before that term existed. He produced spinoff after spinoff (“cookies from my cookie cutter,” he described them to Playboy, in 1976), including “Maude” and “The Jeffersons,” which had their own mouthy curmudgeons. At the Emmys, Johnny Carson joked that Lear had optioned his own acceptance speech. A proud liberal, Lear had clear ideological aims for his creations: he wanted his shows to be funny, and he certainly wanted them to be hits, but he also wanted to purge prejudice by exposing it. By giving bigotry a human face, Lear believed, his show could help liberate American TV viewers. He hoped that audiences would embrace Archie but reject his beliefs.

Yet, as Saul Austerlitz explains in his smart new book, “Sitcom: A History in 24 Episodes from ‘I Love Lucy’ to ‘Community,’ ” Lear’s most successful character managed to defy his creator, with a “Frankenstein”-like audacity. “A funny thing happened on the way to TV immortality: audiences liked Archie,” Austerlitz writes. “Not in an ironic way, not in a so-racist-he’s-funny way; Archie was TV royalty because fans saw him as one of their own.” (Source)

This is known as the problem of the Bad Fan

Against Empathy [...]

Empathy is sometimes touted as key to mutual understanding. But what if empathy were the problem?

Paul Bloom, psychologist and Yale professor, argues that empathy is a bad thing—that it makes the world worse. While we’ve been taught that putting yourself in another’s shoes cultivates compassion, it actually blinds you to the long-term consequences of your actions. In this animated interview from The Atlantic, we hear Bloom’s case for why the world needs to ditch empathy.


Torturers need high amounts of empathy to understand what is painful and what is not. See The Empathy of Torturers

Compassion Fatigue argues compassion is a limited resource, often wasted on trivialities.

Online Disinhibition Effect argues that lack of face-to-face empathy is part of online disinhibition.

Empathy is tribal. See Red Team Switch

Orientalism [...]

Edward Said’s signature contribution to academic life is the book Orientalism. It has been influential in about half a dozen established disciplines, especially literary studies (English, comparative literature), history, anthropology, sociology, area studies (especially middle east studies), and comparative religion. However, as big as Orientalism was to academia, Said’s thoughts on literature and art continued to evolve over time, and were encapsulated in Culture and Imperialism (1993), a book which appeared nearly 15 years after Orientalism (1978).

Put highly reductively, the development of his thought can be understood as follows: Said’s early work began with a gesture of refusal and rejection, and ended with a kind of ambivalent acceptance. If Orientalism questioned a pattern of misrepresentation of the non-western world, Culture and Imperialism explored with a less confrontational tone the complex and ongoing relationships between east and west, colonizer and colonized, white and black, and metropolitan and colonial societies.

Said directly challenged what Euro-American scholars traditionally referred to as “Orientalism.” Orientalism is an entrenched structure of thought, a pattern of making certain generalizations about the part of the world known as the ‘East’. As Said puts it:

“Orientalism was ultimately a political vision of reality whose structure promoted the difference between the familiar (Europe, West, “us”) and the strange (the Orient, the East, “them”).”

Just to be clear, Said didn’t invent the term ‘Orientalism’; it was a term used especially by middle east specialists, Arabists, as well as many who studied both East Asia and the Indian subcontinent. The vastness alone of the part of the world that European and American scholars thought of as the “East” should, one imagines, have caused someone to think twice. But for the most part, that self-criticism didn’t happen, and Said argues that the failure there – the blind spot of orientalist thinking – is a structural one. (Source)

Black Stats [...]

Black people account for less than a third of welfare recipients. Black parents are more involved in their kids’ homework than any other racial demographic. There are 600,000 more black men in college than in prison. There has been huge progress in degree attainment since the 1970s. The level of community service in the black community is much higher than in comparable communities. The book Black Stats documents the persistent myths we carry around, both liberals and conservatives, that make our public discourse so counter-productive.

IED and Toxoplasmosis [...]

Coccaro told Live Science he was intrigued by the body of scientific literature linking toxoplasmosis with psychiatric disorders. For this study, his research team recruited 358 adults. About one-third had IED, defined by the Diagnostic and Statistical Manual of Mental Disorders as recurrent, impulsive, problematic outbursts of verbal or physical aggression disproportionate to the situations that trigger them. Another one-third were individuals diagnosed with another psychiatric disorder (not IED). And the remaining third were healthy controls with no psychiatric history.
The research team found that 22 percent of the people with IED tested positive for toxoplasmosis exposure, compared with only 9 percent of the healthy control group. About 16 percent of the group with other psychiatric disorders tested positive for toxoplasmosis, too.

Bikeshedding [...]

Parkinson’s law of triviality is C. Northcote Parkinson’s 1957 argument that members of an organisation give disproportionate weight to trivial issues.[1] He observed that a committee whose job was to approve the plans for a nuclear power plant spent the majority of its time on discussions about relatively minor but easy-to-grasp issues, such as what materials to use for the staff bike-shed, while neglecting the proposed design of the plant itself, which is far more important but also a far more difficult and complex task.

The Compendious Book on Calculation by Completion and Balancing [...]

The Compendious Book on Calculation by Completion and Balancing (Arabic: الكتاب المختصر في حساب الجبر والمقابلة‎, Al-kitāb al-mukhtaṣar fī ḥisāb al-ğabr wa’l-muqābala;[1] Latin: Liber Algebræ et Almucabola) is an Arabic treatise on mathematics written by Persian polymath Muḥammad ibn Mūsā al-Khwārizmī around AD 820 while he was in the Abbasid capital of Baghdad. Translated into Latin by Robert of Chester in the mid-12th century, it introduced the term “algebra” (الجبر, al-ğabr) to European languages including English. The Compendious Book provided an exhaustive account of solving for the positive roots of polynomial equations up to the second degree.[2] (Source)
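The treatise’s best-known case, “squares and roots equal numbers” (x² + bx = c with b, c > 0), is solved rhetorically by what we now call completing the square. A quick sketch of that procedure, using the worked example traditionally cited from the book, x² + 10x = 39:

```python
import math

def positive_root(b, c):
    """Positive root of x^2 + b*x = c (b, c > 0) by completing the square:
    x^2 + b*x + (b/2)^2 = c + (b/2)^2, so (x + b/2)^2 = c + (b/2)^2,
    giving x = sqrt(c + (b/2)^2) - b/2."""
    return math.sqrt(c + (b / 2) ** 2) - b / 2

# The classic example x^2 + 10x = 39: add (10/2)^2 = 25 to both sides,
# take the square root of 64, subtract 5.
print(positive_root(10, 39))  # 3.0
```

Note the restriction to the positive root: as the passage above says, the book treats only positive solutions, which is why it splits quadratics into separate cases rather than using one general formula.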

Anonymous Credentials [...]

Credentials can be granted that are not reliant on giving identifying details

Anonymous credential systems allow users to authenticate themselves in a privacy-preserving manner. (Source)

To The Big House [...]

Testing to see if I can get a card here to appear at the main Wikity.

Why am I confused?

Because you ate too much for breakfast

Educators’ Iterative Literacy Practice in CLMOOC [...]

The Connected Learning Massive Open Online Collaboration (CLMOOC) is an online professional development experience designed as an openly networked, production-centered, participatory learning collaboration for educators. Addressing the paucity of research that investigates learning processes in MOOC experiences, this paper examines the situated literacy practices that emerged as educators in CLMOOC composed, collaborated, and distributed multimediated artifacts. Using a collaborative, interactive visual mapping tool as participant-researchers, we analyzed relationships between publicly available artifacts and posts generated in one week through a transliteracies framework. Culled data included posts on Twitter (n = 678), a Google+ Community (n = 105), a Facebook Group (n = 19), a blog feed (n = 5), and a “make” repository (n = 21). Remix was found to be a primary form of interaction and mediator of learning. Participants not only iterated on each other’s artifacts, but on social processes and shared practices as well. Our analysis illuminated four distinct remix mobilities and relational tendencies—bursting, drifting, leveraging, and turning. Bursting and drifting characterize the paces and proximities of remixing while leveraging and turning are activities more obviously disruptive of social processes and power hierarchies. These mobilities and tendencies revealed remix as an emergent, iterative, collaborative, critical practice with transformative possibilities for openly networked web-mediated professional learning. (Source)

Marilyn Jorgenson Reece [...]

The three-level San Diego-Santa Monica freeway interchange — where the 405 meets the 10 — was the first interchange in California designed by a female engineer. That engineer was Marilyn Jorgenson Reece. In fact, she was the first woman in California to be registered as a civil engineer.

Marilyn’s accomplishments inspired not just engineers, but countless women who wanted to go into engineering and other professions yet were hesitant to redraw the boundaries.

– Debra Bowen, former California Secretary of State

Marilyn Reece had an impact on a number of the roadways we travel today. (California Department of Transportation)

Urban critic Reyner Banham described the 405-10 interchange in his book “Los Angeles: The Architecture of Four Ecologies” in glowing terms: “The Santa Monica/San Diego intersection is a work of art, both as a pattern on the map, as a monument against the sky, and as a kinetic experience as one sweeps through it.”

That was by design. In 1995, Reece told the Los Angeles Times that she had designed the interchange with aesthetics in mind, putting her “heart and soul into it.”

“It is very airy. It isn’t a cluttered, loopy thing,” she said of the interchange, which was completed in 1964. The idea was to keep traffic moving at high speeds, and to allow drivers to go 55 mph, the roadway needed long, sweeping curves. “That was so you didn’t have to slam on the brakes, like you do on some interchanges.” (Source)

Internet Teaches AI Chatbot Racism [...]

The Internet managed to teach Microsoft’s new AI chatbot racism, anti-semitism, sexism, and conspiracy theories in one day.

Microsoft’s Tay AI is youthful beyond just its vaguely hip-sounding dialogue — it’s overly impressionable, too. The company has grounded its Twitter chat bot (that is, temporarily shutting it down) after people taught it to repeat conspiracy theories, racist views and sexist remarks. We won’t echo them here, but they involved 9/11, GamerGate, Hitler, Jews, Trump and less-than-respectful portrayals of President Obama. Yeah, it was that bad. The account is visible as we write this, but the offending tweets are gone; Tay has gone to “sleep” for now. (Source)

Screencaps from CBS News in LA. (Source)

Researchers could have predicted this. A similar issue happened with Watson. See Foul-Mouthed Watson

There were numerous other issues with Tay. See Eliza from Hell

Speaking of conspiracy theories, statistical modeling can help debunk them. See Failure Curves for Conspiracies

Foul-Mouthed Watson [...]

IBM’s Watson AI could learn from text via algorithm. But learning is never neutral, and programmers found themselves having to scrub things like Urban Dictionary from Watson’s memory.

“When IBM’s current superstar AI, Watson, was fed the contents of the Internet, it gorged itself on the worst of UrbanDictionary. To an unbiased algorithm, the vulgarities and obscenities were epistemological truth with equal standing to its troves of medical knowledge. Caught embarrassed with a foul-mouthed program, IBM was forced to scrub clean its database.”

From Fortune magazine, via the Atlantic:

“Watson couldn’t distinguish between polite language and profanity — which the Urban Dictionary is full of. Watson picked up some bad habits from reading Wikipedia as well. In tests it even used the word “bullshit” in an answer to a researcher’s query.

Ultimately, Brown’s 35-person team developed a filter to keep Watson from swearing and scraped the Urban Dictionary from its memory.” – Source

But what if profanity is important? See Useful Profanity

SB 281 [...]

Georgia bill provides that K-12 students can (among other things) opt out of LMS use.

A BILL to be entitled an Act to amend Article 19 of Chapter 2 of Title 20 of the Official Code of Georgia Annotated, relating to instructional materials and content in elementary and secondary education, so as to require schools to provide certain information to students and parents prior to using any digital-learning platform; to provide for definitions; to provide for destruction of student data collected through a digital-learning platform; to provide the opportunity to opt out; to provide for legislative findings; to provide for related matters; to repeal conflicting laws; and for other purposes. (Source)

The bill text on the opt-out reads:

“(e) Unless the school determines and declares the platform to be essential to its educational mission with an explanation of the basis for such determination, eligible students or parents or guardians shall be allowed to opt out of using any digital-learning platform. Students who have opted out shall be provided traditional instruction in the academic content covered by such digital-learning platform.”

There’s a parallel here to early privacy laws passed in the 1970s. See Defensive Computing

One solution? Collect less data. See Data Is a Toxic Asset

This can be seen as an example of Digital Dualism.

Transparent and Open [...]

What it means to be Open

Open is a quality of participation. Something is open when other people besides the owner or initiator can participate in it. People can participate by watching (yes, lurkers participate), by taking it off for their own use (aka ‘forking it’), by interacting about it with others, or by contributing to it themselves.

The opposite of open is closed. A dance rehearsal is closed, while a dance party is open.

What it means to be Transparent

Transparent is a quality of information. Something is transparent when it is easy to understand.

When a process, situation or action is transparent, you can identify:

  • What is being done,
  • To what degree,
  • By whom and to whom,
  • When it will transpire, and
  • Why it works the way it does.

Transparency is ‘the ease with which a practice can be identified and understood’. (Source)

Eliza From Hell [...]

Eliza, the famous “toy AI”, could talk with people going through trauma fairly well by reflecting their statements back as questions. Microsoft’s new AI Tay? Not so much. Here a tweeter tells Tay they want to kill themselves; Tay’s response is “i feel you”.

One other thing comes to mind looking at Tay’s performance: the Turing Test was originally conceived as an experiment involving two adults. Recent attempts to emulate emotionally stunted tweenagers with AIs are interesting, but cheating, really.

We’ll see if the AI gets better. It may. Let’s hope.

Tay’s tweetstream.

Script Theory explains the centrality of experience to intelligence.

The original paper on Eliza (pdf)

End of Protest [...]

Don’t protest the same way twice. If there’s a single message to be gleaned from Occupy Wall Street co-initiator Micah White’s idea-packed polemic against conventional protest, The End of Protest: A New Playbook for Revolution, that’s probably it. (Source)

Google Results Influence Independents [...]

Google results can have a huge impact on how independents view candidates.

Beyond market control, the algorithms powering these platforms can wade into murky waters. According to a recent study from the American Institute for Behavioral Research and Technology, information displayed in Google could shift voting preferences for undecided voters by 20 percent or more — all without their knowledge. Considering how narrow the results of many elections can become, this margin is significant. In many ways, Google controls what information people see, and any bias, intentional or not, has a potential impact on society. (Source)

WeChat and Uber [...]

But their scale is also concerning. For example, Chinese messaging service Wechat (which is somewhat like Twitter) recently used its popularity to limit market choice. The company banned access to Uber to drive more business to their own ride-hailing service. Meanwhile, Facebook engineered limited web access in developing economies with its Free Basics service. Touted in India and other emerging markets as a solution to help underserved citizens come online, Free Basics allows viewers access to only a handful of pre-approved websites (including, of course, Facebook). India recently banned Free Basics and similar services, claiming that these restricted web offerings violated the essential rules of net neutrality. (Source)

The Malware Museum [...]

The Malware Museum is a collection of malware programs, usually viruses, that were distributed in the 1980s and 1990s on home computers. Once they infected a system, they would sometimes show animation or messages that you had been infected. Through the use of emulations, and additionally removing any destructive routines within the viruses, this collection allows you to experience virus infection of decades ago with safety. (Source)

A Hamburger is More Than a Sandwich [...]

Sure, bread endpoints, but it is a unique structure, putting aside atrocities such as tomatoes and beets.

Mike Will Never Believe This Happened [...]

A Wikity is born in the house of the dog.

Bacon Dog

Alan Levine’s Wikity experiment can be found here.

Free as in DARPA [...]

When we talk about free software that really scaled, tracing back the funding usually gets you to DARPA or government-funded university initiatives. Unfunded OSS mainly capitalized on the broad achievements of DARPA projects.

All of this incredible technology that can be dated back decades, primarily originated from projects funded through DARPA. Berkeley’s CSRG, which created BSD based on AT&T’s UNIX operating system, was funded primarily by DARPA, as were projects that brought forth TCP/IP, Arpanet, and many others.

If you like Mac OS, thank the government – because it sucked until it started incorporating Unix. If you like the Internet, thank the government. Code was made public and free as a result of its federal funding sources, and even today most code is released in a similar way by FFRDCs or other government-funded organizations. (Source)

There’s a pattern here — the truly transformative open stuff is all about the government, because only the government can afford to invest in so many things that fail in order to find the one project that succeeds. This use of government support — to fund the crazy ideas — is largely a thing of the past, and so we plod onward in models developed at the height of the Cold War, because no company can offer any product that violates consumer expectations and expect to stay afloat.


Social Media and Politics: Introduction [...]

The following materials are on the subject of social media and politics, and aim to show the many ways in which social media is impacting political thought and the many ways in which political dialogue is impacting social media.

Abstainer Bias [...]

Taking out abstainers provides a simple check on J-Curve patterns in data. Are they truly J-Curve patterns, or are you just seeing abstainer bias? For alcohol, the answer seems to be that abstainer bias rules.

Drinking a little bit of alcohol is good for you, right? That, at least, is the conventional wisdom lurking in the back of your mind as you nurse your second glass of wine on a Tuesday night.

But does this really make sense? I mean, apart from the fact that “occasional” drinkers should be way closer to the Y axis, why would having less than one drink a week bring roughly the same benefit as downing one or two per day? Something doesn’t quite add up here, and that’s what these horrid researchers have discovered as well. “A fundamental question is, who are these moderate drinkers being compared against?” lead author Tim Stockwell, director of the University of Victoria’s Centre for Addictions Research in British Columbia, noted in a statement.

The problem here has to do with something Stockwell calls “abstainer bias.” Because there’s a difference between just not drinking and not drinking because you have serious health problems, or it’s killing your marriage, or whatever.

So what happens when you account for this abstainer bias? Well, things aren’t looking so good for you now, are they, lush? (Source)
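The mechanics of abstainer bias are easy to demonstrate with a toy simulation (all numbers below are invented for illustration, not taken from Stockwell’s study): give everyone a mortality risk that rises with drinking, make a fraction of abstainers “sick quitters” who carry extra risk, and a J-curve appears in the naive comparison.

```python
import random

random.seed(42)

LABELS = {0: "abstainer", 2: "light", 10: "moderate", 25: "heavy"}

def simulate(n=100_000):
    """Toy cohort: 'true' mortality risk rises linearly with drinks per
    week, so zero drinks is genuinely safest. But 30% of abstainers are
    former drinkers who quit because they were already ill."""
    groups = {label: [] for label in LABELS.values()}
    for _ in range(n):
        drinks = random.choice(list(LABELS))
        sick_quitter = (drinks == 0 and random.random() < 0.30)
        risk = 0.05 + 0.002 * drinks + (0.08 if sick_quitter else 0.0)
        groups[LABELS[drinks]].append((random.random() < risk, sick_quitter))
    return groups

def mortality(records):
    return sum(died for died, _ in records) / len(records)

groups = simulate()
naive = {label: mortality(recs) for label, recs in groups.items()}

# Naively, abstainers look worse than light drinkers -- a J-curve.
print({label: round(rate, 3) for label, rate in naive.items()})

# Account for abstainer bias: drop the sick quitters, and the
# abstainer group's apparent excess risk disappears.
healthy_abstainers = [r for r in groups["abstainer"] if not r[1]]
print(round(mortality(healthy_abstainers), 3))
```

The point is not the specific numbers but the shape: a single hidden subgroup in the “zero” bin is enough to make moderate consumption look protective.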

The Salt J-Curve might be another example of this.

Path:: Social Media and Politics [...]

Anger Spreads Fastest
Analytics of Empathy
Shame, Guilt and Social Media
Death of the Longest Shortest Time Mamas
Too Many People Have Peed in the Pool
Not Just the Trolls
Reducing Abuse on Twitter
Sanders Filter Bubble
Berners-Lee on a Harmonious Web
Own Worst Enemy

Sanders Filter Bubble [...]

Many Sanders supporters never saw critiques of Sanders.

Sanders was able to broaden his appeal among liberals despite the fact that many prominent liberal pundits — including Paul Krugman, Jonathan Chait, Kevin Drum, and Jamelle Bouie — were attacking Sanders for having half-baked policy proposals and an unrealistic political strategy. One big reason these attacks failed is that a lot of Sanders fans never saw them.

People on /r/politics aren’t just reading more articles about Sanders, they’re also overwhelmingly getting articles that are pro-Sanders. Articles that criticize Sanders or make the case for Hillary Clinton (or, for that matter, any of the Republican candidates) are much less likely to reach the front page.

And this doesn’t just happen on Reddit. Similar things tend to happen on social media sites like Facebook and Twitter. These sites are all about sharing with your friends. And because people tend to have similar politics as their friends, this means social media tends to reinforce what people already believe.

“It’s easier than ever to surround yourself with information that confirms what you already believed was true,” says Eli Pariser, a liberal activist who founded the social news site Upworthy. In a 2011 book, Pariser dubbed this phenomenon a “filter bubble.”

Last year, researchers at Facebook documented the extent of this phenomenon. Liberal Facebook users see more liberal articles on their newsfeeds. Conservatives see more conservative articles. The Facebook researchers argued that this wasn’t really Facebook’s fault — it simply reflects the kinds of friends people have and the kinds of articles they share. But regardless of whose fault it is, the result is the same. (Source)

Varieties of Writer’s Block [...]

Not all unhappy writers were created equal, however. They fell, Barrios and Singer discovered, into four general types. In one group, anxiety and stress dominated; to them, the main impediment to writing was a deep emotional distress that sapped the joy out of writing. In another group, unhappiness expressed itself interpersonally, through anger and irritation at others. A third group was apathetic and disengaged, while a fourth tended to be angry, hostile, and disappointed—their emotions were strongly negative, as opposed to merely sad. These differences would turn out to be consequential. Different kinds of unhappy writers, Barrios and Singer discovered, are blocked differently.

There are some experiences that almost all blocked writers have in common. Almost all of them experience flagging motivation; they feel less ambitious and find less joy in writing. They’re also less creative. Barrios and Singer found that blocked individuals showed “low levels of positive and constructive mental imagery”: they were less able to form pictures in their minds, and the pictures they did form were less vivid. They were less likely to daydream in constructive fashion—or to dream, period.

The surprise was that these motivational and creative shortfalls expressed themselves differently for the different kinds of unhappy writers. The first, more anxious group felt unmotivated because of excessive self-criticism—nothing they produced was good enough—even though their imaginative capacity remained relatively unimpaired. (That’s not to say that their imaginations were unaffected: although they could still generate images, they tended to ruminate, replaying scenes over and over, unable to move on to something new.) The second, more socially hostile group was unmotivated because they didn’t want their work compared to the work of others. (Not everyone was afraid of criticism; some writers said that they didn’t want to be “object[s] of envy.”) Although their daydreaming capacity was largely intact, they tended to use it to imagine future interactions with others. The third, apathetic group seemed the most creatively blocked. They couldn’t daydream; they lacked originality; and they felt that the “rules” they were subjected to were too constrictive. Their motivation was also all but nonexistent. Finally, the fourth, angry and disappointed group tended to look for external motivation; they were driven by the need for attention and extrinsic reward. They were, Barrios and Singer found, more narcissistic—and that narcissism shaped their work as writers. They didn’t want to share their mental imagery, preferring that it stay private. (Source)

Nuance and Hysteria [...]

Here’s the problem: Virtually everyone is in hysteria over this. The only way to push back is to get people’s attention, and that means writing without too much nuance. Under the circumstances, I’ve actually been pretty restrained. (Source)

Double Breakfast [...]

Students who eat two breakfasts are less likely to be overweight than students who eat none.

Published in the journal Pediatric Obesity, the study included 584 middle school students from 12 schools in an urban school district where breakfast and lunch are provided to all students at no cost. Researchers tracked the students’ breakfast-eating locations and patterns, and their weight over a two-year period from 5th grade in 2011-2012 to 7th grade in 2013-2014.


  • Students who skipped or ate breakfast inconsistently were more than twice as likely to be overweight or obese compared with students who ate double breakfasts.
  • The weight changes from 5th to 7th grade for the students who ate double breakfasts were no different than the weight changes measured for all of the other students.

“When it comes to the relationship between school breakfast and body weight, our study suggests that two breakfasts are better than none,” says Marlene Schwartz, a study author and director of the Rudd Center. (Source)

Joe Pyne’s Radio Show [...]

Pyne’s radio show from the 1950s and 60s was an early pioneer of the confrontational style of talk show that would later come to dominate AM radio and cable TV.

There are many documented cases of Pyne getting into altercations with people on his show. He preferred controversial guests such as Sam Sloan and invited members of the Ku Klux Klan, the American Nazi Party, and followers of murderer Charles Manson. Pyne argued this was educational, since it exposed these violent groups to the public eye.[9] The Joe Pyne Show was not only verbally confrontational: at times the conflict became physical, with chairs being thrown at Pyne by the interviewee. If the “discussion” got too heated, the guest would often walk off, or Pyne would himself throw the guest off the show. Still, Pyne once described himself as an “overly compensating introvert.” (Source)

End of March Wikity Roadmap [...]

  1. Add in-place posting from catalog view (via frame/pop-up).
  2. Add state variable for target site.
  3. In catalog view, if not admin/editor but has target site, Update becomes Copy.
  4. If the user has neither a target site nor editor rights, Update is replaced by a login/set-target prompt.
  5. A list is maintained of everything forked from the site. This can be repurposed as a path.
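Items 3 and 4 amount to a small decision table for the catalog view’s action button. A minimal sketch (invented names, not Wikity’s actual code) of that logic:

```python
from typing import Optional

def catalog_button(is_editor: bool, target_site: Optional[str]) -> str:
    """Which action the catalog view offers, per items 3-4 above."""
    if is_editor:
        return "Update"                  # admins/editors update in place
    if target_site:
        return "Copy"                    # others fork to their target site
    return "Login or set target site"    # neither rights nor a target set

# The three states:
print(catalog_button(True, None))               # editor on own site
print(catalog_button(False, "mywiki.example"))  # visitor with a target set
print(catalog_button(False, None))              # anonymous visitor
```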


  1. Fix the way bullet indent displays. I don’t like it.

Animal House Politics [...]

Still, with tongue not entirely planted in cheek, Greaney — who also wrote the famed rodeo anthem scene in “Borat” — accepts some of the collective blame for allowing a Trump to flourish as a candidate.

“I blame us — I blame the culture of comedy,” he tells Comic Riffs. Greaney posits that the movie “Animal House,” which “mocked the norms of decent behavior,” helped counterculture viewpoints launch into the American comedic mainstream, thus fostering such establishment-mocking shows as “The Simpsons.” Perhaps all this mockery, he says, somehow gave rise to an “anti-political establishment” candidate like Trump. (Source)

Sample Table [...]

Year President
1980 Reagan
1988 Bush
1992 Clinton
2000 Bush
2008 Obama

You might want to check out this page.

Silicon Valley Monocrop [...]

Successful startups are born at places like Y-Combinator and go through the venture capital gauntlet frictionlessly — the same way big factory farms across America churn out cheap corn and beef.
Yet there is a problem with monocrop culture: ultimately, you deplete the soil. In a recent podcast with Kleiner Perkins partner Randy Komisar and legendary Silicon Valley “coach” Bill Campbell — mentor to Steve Jobs and Larry Page — Randy asked whether, over time, entrepreneurs were solving increasingly frivolous problems. Campbell responded, tellingly, that entrepreneurs solve problems that they can understand.
“While you and I might think Snapchat is frivolous,” Campbell said, “my grandchildren find it a great solution for how better to communicate with their friends.”
Snapchat may be solving an important problem for well-connected young people in America who don’t have to worry about basic needs. But whether it’s unemployed young people in St. Louis looking for their next paycheck or a family in Flint, Michigan worried about clean water, many Americans have more immediate problems.
But the entrepreneurs there — “those people” — often don’t have access to resources or opportunities to solve their problems. And Silicon Valley can’t foresee a future where St. Louis or Flint could create the jobs of the future. (Source)

Solving the Rider’s Problem [...]

And we don’t blame Travis Kalanick (actually we do, but that’s not the point of this story). Uber’s founders’ experiences are as riders, not drivers. But imagine an ownership structure in which, for example, drivers could earn fractional equity in the company for each ride they gave. What if a percentage of the $50B valuation were shared among the drivers, based on a merit-based system?
We’re not saying that Uber should do this (they can’t at this stage); we are saying that if Uber’s leadership had different lived experiences, the company might look different. (Source)

Problems with Test-Driven Development (TDD) [...]

The problems programmers have with test-driven development sound surprisingly similar to the problems teachers have with assessment-first education.

But, I’m not giving up on TDD because of the (well-known) difficulties in writing automated tests for GUIs. I’m giving up because I think it encourages conservatism in programming, it encourages you to make design decisions that can be tested rather than the right decisions for the program users, it makes you focus on detail rather than structure and it doesn’t work very well for a major class of program problems – those caused by unexpected data.

Because you want to ensure that you always pass the majority of tests, you tend to think about this when you change and extend the program. You therefore are more reluctant to make large-scale changes that will lead to the failure of lots of tests. Psychologically, you become conservative to avoid breaking lots of tests.
It is easier to test some program designs than others. Sometimes, the best design is one that’s hard to test, so you are more reluctant to take this approach because you know that you’ll spend a lot more time designing and writing tests (which I, for one, find quite boring)
The most serious problem for me is that it encourages a focus on sorting out detail to pass tests rather than looking at the program as a whole. I started programming at a time where computer time was limited and you had to spend time looking at and thinking about the program as a whole. I think this leads to more elegant and better structured programs. But, with TDD, you dive into the detail in different parts of the program and rarely step back and look at the big picture.
In my experience, lots of program failures arise because the data being processed is not what’s expected by the programmer. It’s really hard to write ‘bad data’ tests that accurately reflect the real bad data you will have to process because you have to be a domain expert to understand the data. The ‘purist’ approach here, of course, is that you design data validation checks so that you never have to process bad data. But the reality is that it’s often hard to specify what ‘correct data’ means and sometimes you have to simply process the data you’ve got rather than the data that you’d like to have. (Source)
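The bad-data point is easy to make concrete. In this invented sketch (not from the source), the TDD-style test covers the bad input the programmer imagined, while realistic feed values still crash at runtime:

```python
def parse_temperature(field: str) -> float:
    """Parse one temperature reading from a CSV field (toy example)."""
    return float(field.strip())

def test_rejects_empty_field():
    """The kind of 'bad data' test a programmer thinks to write."""
    try:
        parse_temperature("")
    except ValueError:
        return True
    return False

# Values an actual feed might contain; only a domain expert would know
# to test for these, and each one raises an unhandled ValueError:
SURPRISES = ["21,5", "N/A", "21.5 C", "72F"]

def count_crashes(fields):
    crashes = 0
    for field in fields:
        try:
            parse_temperature(field)
        except ValueError:
            crashes += 1
    return crashes

print(test_rejects_empty_field(), count_crashes(SURPRISES))
```

The test suite is green, yet every one of the “surprise” inputs would take the program down in production — which is exactly the gap Sommerville is describing.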

451 Responses for DMCA Takedowns [...]

In December 2015, the IETF ratified status code 451. A 451 response indicates that a resource is unavailable due to an external legal request.

The GitHub API will now respond with a 451 status code for resources it has been asked to take down due to a DMCA notice. For example: (Source)
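A client that wants to treat a takedown differently from an ordinary missing page can branch on the status code. A minimal standard-library sketch (helper names are hypothetical, not GitHub’s API):

```python
import urllib.request
import urllib.error

LEGAL_BLOCK = 451  # RFC 7725: Unavailable For Legal Reasons

def classify(status: int) -> str:
    """Map an HTTP status code to a rough, takedown-aware category."""
    if status == LEGAL_BLOCK:
        return "taken down for legal reasons"
    if status == 404:
        return "not found"
    if 200 <= status < 300:
        return "ok"
    return "other error"

def fetch(url: str):
    """Fetch url, returning (category, body); body is empty on errors."""
    try:
        with urllib.request.urlopen(url) as resp:
            return classify(resp.status), resp.read()
    except urllib.error.HTTPError as err:
        return classify(err.code), b""
```

The distinction matters for archiving and mirroring tools: a 404 may come back, while a 451 signals that the resource was deliberately removed under legal pressure.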

License Plates [...]

For example, a rear license plate that is dirty while the rest of the car is clean, or the opposite, can indicate that the plate has been moved from front to back, or has been taken off another car. A smart car thief will use a plate registered to the same make and model of car, so a cursory registration check will show it belonging to a Chevy Camaro, or whatever. Similarly, license plates with scratches around the attachment bolts, or bright bolts on a dirty plate, can also evidence a recent change.

This is true[1]

Clearinghouse Clearing Out [...]

Ohio State officials have taken no position on the elimination of funding for the clearinghouse, calling it a matter for state policymakers. But last fall, when the university received notice that funding would not continue, a document prepared by the College of Education estimated that 82 percent of Ohio’s schools and districts have used the resource in some way. It said that the clearinghouse included more than 12,000 lessons in key subject areas, more than 950 reviews of online courses and more than 1,000 professional-development lessons for teachers.

Liberal Aha! [...]

“Liberals have a less structured and more flexible cognitive style, according to those studies. Our research indicates that cognitive differences in people with different political orientations also are apparent in a task that some consider to be convergent thinking: finding a single solution to a problem,” Salvi said.

Given previous findings relating political orientation with cognitive styles, the researchers hypothesized that liberals and conservatives would preferentially employ different processes when tackling problems that could be solved using either an analytical or insight approach.

“It’s not that there’s a different capacity to solve problems,” stressed Mark Beeman, senior author of the study and professor and chair of psychology at Northwestern. “It’s more about which processes people end up engaging in to solve the problem.” (Source)

Gender Price Gap and the New [...]

Women make only 80 cents on the dollar when selling identical new items on eBay, but make 97 cents on the dollar when selling identical old ones. These patterns hold even when sellers are selling an item with a set monetary value, such as a gift card, and are seen whether the buyers are women or men.

We analyze a unique and large data set containing all eBay auction transactions of most popular products by private sellers between the years 2009 and 2012. Women sellers received a smaller number of bids and lower final prices than did equally qualified men sellers of the exact same product. On average, women sellers received about 80 cents for every dollar a man received when selling the identical new product and 97 cents when selling the same used product. These findings held even after controlling for the sentiments that appear in the text of the sellers’ listings. Nonetheless, it is worth noting that this gap varied by the type of the product being sold. (Source)

From later in the article:

As shown in Fig. 2, the results of the experiment support the hypothesis that a lower value is assigned to products when sold by women than by men. One hundred sixteen people participated in the experiment; 59 were asked to report their evaluation of a $100 gift card sold by Alison, and 57 were asked to report their evaluation of a $100 gift card sold by Brad. The average value assigned to the gift card sold by a woman was $83.34, whereas the average value assigned to the same card sold by a man was $87.42 (P < 0.05; see table S10). Recall that similar differences in price between women and men sellers were found when we analyzed transactions of gift cards on eBay (Table 2). (The gender of the participants in the experiments did not affect the final price, nor did it affect the differences between the prices of gift cards sold by a woman and the prices of gift cards sold by a man.) (Source)

The “new” aspect of this finding is interesting, and deserves analysis. Why is the effect greater when the item is new? Is it possible that the price offered for old objects is mitigated by other concerns?

Article mentions this as connected to the Goldberg Paradigm experiment.

The Meaningless Where [...]

The increasing use of “preprints” in biology, a system whereby researchers submit their articles to online sites like bioRxiv months before they are printed in journals, has fostered the most recent round of debate. While preprints are currently synergistic with the publishing world, they may not be for long.

In any case, some researchers say, a détente between journals and preprint advocates may be short-lived. If university libraries drop their costly journal subscriptions in favor of free preprints, journals may well withdraw permission to use them, forcing biomedical researchers to make a harder choice. The preprint movement, some #ASAPbio advocates argue, may presage the need for a greater cultural shift than scientists have yet been willing to make: evaluating one another based on the substance of their papers, not where they are published. But it does, Michael Eisen, a longtime advocate for scientific publishing reform at the University of California, Berkeley, told his colleagues, help move this area of scientific publishing “into the 20th century.’’ (Source)

What would newer systems that “ignore the where” look like? Other online reputation systems hold clues. Prominent researchers could vote the research up or down. Networks of citation could provide additional insight into value. In short, reputation could become more decentralized and networked.

BioRxiv is the best known preprint source in biology. (Site)

The Guardian covered the issue of preprints last year, and has additional information. (Article)

Another way to share results more quickly is Lab Blogging. For an example, see Geospatial Ecology of Marine Megafauna Laboratory (Oregon State)

The Empathizing-Systemizing Theory, Social Abilities, and Mathematical Achievement in Children : Scientific Reports [...]

We found no evidence for a relationship between systemizing and math achievement after accounting for general cognitive and reading abilities. There was, however, a negative association between empathizing and calculation ability that was more pronounced in girls. This relationship was mediated by social abilities and not by autistic mannerisms, indicating that skills in picking up social cues may result in poorer math achievement. Social awareness was found to play a differential role in mediating the relationship between EQ-C and math achievement in girls. One interpretation is that the tendency toward social awareness makes girls, but not boys, susceptible to the social transmission of negative gender stereotypes in math. (Source)

Origin of QWERTY [...]

And so the development of the QWERTY keyboard began. The design was not dictated by a sales department, or the limitations of the mechanics of the first typewriters. Instead, the QWERTY layout was designed for Morse code, with significant regard given to putting the most frequently used letters on the home row.

The Morse code used in 19th century America was not the Morse code we know today. American Morse code is subtly different from the International Morse used today. American Morse encoded the letter ‘Y’ as (·· ··); two dits, a space, and two dits. ‘Z’ is encoded as (··· ·), and was commonly confused with ‘SE’, especially when appearing as the first letters of a word. Therefore, the ‘S’, ‘E’, and ‘Z’ keys should be close together. For the same reason, C – in Morse, (·· ·) – should be placed near ‘S’, ‘I’, and ‘E’. There is a reason we don’t use American Morse anymore.
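The collisions described above can be checked directly. A short script (letter codes transcribed from the paragraph above, plus the standard ‘I’ = ··) shows that ‘Z’ on the wire is indistinguishable from ‘SE’, and ‘C’ from ‘IE’:

```python
# Partial American Morse table; note the internal space inside Z and C.
AMERICAN_MORSE = {
    "E": ".",
    "I": "..",
    "S": "...",
    "C": ".. .",
    "Z": "... .",
}

def signal(text: str) -> str:
    """Render a letter sequence with a single inter-letter space."""
    return " ".join(AMERICAN_MORSE[ch] for ch in text)

print(AMERICAN_MORSE["Z"] == signal("SE"))  # the Z/SE collision
print(AMERICAN_MORSE["C"] == signal("IE"))  # the C/IE collision
```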

These efforts culminated in the typewriter that would grace the cover of the August 10, 1872 cover of Scientific American. For the first time, something resembling the modern QWERTY layout was available. It wasn’t perfect – ‘M’ wasn’t next to ‘N’, ‘C’ and ‘X’ were swapped. Characters, numerals, and punctuation were all over the keyboard, but this was what suited the telegraphers and became the basis of the first commercially successful typewriters. (Source)

Perceptions of Inequality and Dropout Rates [...]

More on inequality and outcomes.

But let’s be clear about what the authors mean by inequality. In this paper, since Kearney and Levine look at the effects on students from low socioeconomic backgrounds (measured by the educational level of their mothers), they focus on inequality between those at the bottom and those at the middle of the income distribution. And they are looking at inequality within states and metropolitan areas.

The authors find that states and metropolitan areas with higher levels of inequality have higher high school dropout rates. But the result is only significant for boys from low socioeconomic backgrounds. Boys in high-inequality states are 6 percentage points more likely to drop out than similar boys in low-inequality states. While their analysis doesn’t definitively prove causation, it does show that many other possible explanations don’t reduce the strength of the statistical relationship.

Kearney and Levine’s argument is about students’ perceptions, but they also present some interesting evidence on “returns” to an extra year of school. In low-inequality states, the returns appear to be about the same across all classes. But in high-inequality states, the returns are not only slightly lower but also less equally distributed. This is particularly interesting in light of other research on differential rates of return to education. Perhaps this is why boys from low socioeconomic backgrounds perceive high school to not be worth the effort. (Source)

Measuring the Technical Difficulty in Reusing Open Educational Resources with the ALMS Analysis Framework [...]

The Open Educational Resources (OER) movement is roughly ten years old (Wiley & Gurell, 2009). Since that time thousands of resources have been produced. Though these resources have been used both for classroom development and for the autodidact, the development of OER was not without problems. Incompatibility between Creative Commons licenses has made revising and remixing two resources difficult, if not impossible (Linksvayer, 2006). Tools to help educators find appropriate educational resources have been necessary but are still nascent. Educators’ perceived quality issues have also hampered adoption (Wiley & Gurell, 2009). The result is that resources were only being minimally reused (Wiley, 2009). One possible reason observed for the limited reuse was the barrier of technology: some resources were easier to view, revise and remix from a technical perspective than others. Hilton, Wiley, Stein, and Johnson (2010) created the ALMS analysis framework to assess the technical openness of an OER. Although the ALMS framework allowed for an assessment of OER, no pilot instrument was reported in the Hilton et al. (2010) article, and the framework had not been tested because there was no known rubric with which measurement could occur. Consequently, Hilton et al.’s framework needed to be further developed and tested against a range of open educational resources. This dissertation examined the ALMS analysis, previously only a concept, in order to create a concrete framework with sufficient detail and documentation for comparisons to be made among OERs. The rubric was further refined through a Delphi study consisting of experts in the field of OER (n=5). A sample of OERs (n=27) was then rated by a small group of raters (n=4) to determine inter-rater reliability. Intra-class correlation indicated moderate agreement (ICC(2,1)=.655, df=376, 95% CI [.609, .699]).
Findings suggested that the degree of technical difficulty in reusing OERs can be measured in a somewhat reliable manner. These findings may be insightful in developing policies and practices regarding OER development. (Source)
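For readers unfamiliar with the statistic, ICC(2,1) is the Shrout–Fleiss two-way random-effects, absolute-agreement, single-rater coefficient. A minimal sketch of how it is computed (a generic textbook implementation, not the dissertation’s actual code):

```python
def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater
    (Shrout & Fleiss notation). `ratings` is a list of rows, one row per
    rated item (e.g., an OER), with one column per rater."""
    n, k = len(ratings), len(ratings[0])
    grand = sum(sum(row) for row in ratings) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    col_means = [sum(row[j] for row in ratings) / n for j in range(k)]
    ssr = k * sum((m - grand) ** 2 for m in row_means)    # between items
    ssc = n * sum((m - grand) ** 2 for m in col_means)    # between raters
    sst = sum((x - grand) ** 2 for row in ratings for x in row)
    msr = ssr / (n - 1)
    msc = ssc / (k - 1)
    mse = (sst - ssr - ssc) / ((n - 1) * (k - 1))         # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

A perfectly agreeing panel yields 1.0; the reported .655 falls in the range conventionally labeled moderate agreement.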

A Sustainable Model for OpenCourseWare Development (2010) [...]

The purposes of this study were to (a) determine the cost of converting BYU Independent Study’s e-learning courses into OpenCourseWare, (b) assess the impact of opening those courses on paid enrollment in the credit-bearing versions of the courses, and (c) use these data to judge whether or not an OpenCourseWare program could be financially self-sustaining over the long-term without grant monies or other subsidies. The findings strongly suggest that the BYU Independent Study model of publishing OpenCourseWare is financially self-sustaining, allowing the institution to provide a significant public good while generating new revenue and meeting its ongoing financial obligations. (Source)

Electronic versus traditional print textbooks: A comparison study on the influence of university students’ learning [...]

University students are increasingly choosing to purchase e-textbooks for their mobile devices as an alternative to traditional textbooks. This study examines the relationship between textbook format and 538 university students’ grades and perceived learning scores. Results demonstrate that there was no difference in cognitive learning and grades between the two groups, suggesting that the electronic textbook is as effective for learning as the traditional textbook. The mean scores indicated that students who chose e-textbooks for their education courses had significantly higher perceived affective learning and psychomotor learning than students who chose to use traditional print textbooks. (Source)

The Impact of Open Textbooks on Secondary Science Learning Outcomes (2014) [...]

This study uses a quantitative quasi-experimental design with propensity-score-matched groups and multiple regression to examine whether student learning was influenced by the adoption of open textbooks instead of traditional publisher-produced textbooks. Students who used open textbooks scored .65 points higher on end-of-year state standardized science tests than students using traditional textbooks when controlling for the effects of 10 student and teacher covariates. Further analysis revealed statistically significant positive gains for students using the open chemistry textbooks, with no significant difference in student scores for earth systems or physics courses. Although the effect size of the gains was relatively small, and not consistent across all textbooks, the finding that open textbooks can be as effective or even slightly more effective than their traditional counterparts has important implications for school district policy in a climate of finite educational funding. (Source)

State Efforts on Open Textbooks [...]

A state-by-state breakdown of OER education policy (pdf).

Key takeaways from this report

  • Textbook costs are of great importance to state leaders because of the ties to postsecondary affordability.
  • Trending: Policies related to Open Educational Resources (OER). California, Florida, Minnesota, North Dakota and Washington have pursued successful initiatives to support the creation and use of OER through legislation.
  • Non-legislative approaches are gaining traction through pilot programs in Arizona, Minnesota, New York, Texas and Virginia.

UGA Saves $2 million Through OER [...]

The University of Georgia estimated that it has saved students $2 million through the adoption of open educational resources (OER) since 2013. According to Edward Watson, director of the university’s Center for Teaching and Learning, “Our approach has been to pursue large enrollment courses using expensive textbooks. This has enabled us to maximize savings for students.” (Source)

Data Is a Toxic Asset [...]

“Toxic Asset” was a term popularized during the great real estate crash of 2007/8 to describe assets such as high-risk mortgages for which there was no longer a functioning market. These assets add risk to a company’s bottom line with no potential benefit (and in the case of some high-risk mortgages, no exit). Recently, security expert Bruce Schneier has argued that data can be seen as a toxic asset.

What all these data breaches are teaching us is that data is a toxic asset and saving it is dangerous.

Saving it is dangerous because it’s highly personal. Location data reveals where we live, where we work, and how we spend our time. If we all have a location tracker like a smartphone, correlating data reveals who we spend our time with — including who we spend the night with.

Our Internet search data reveals what’s important to us, including our hopes, fears, desires and secrets. Communications data reveals who our intimates are, and what we talk about with them. I could go on. Our reading habits, or purchasing data, or data from sensors as diverse as cameras and fitness trackers: All of it can be intimate.

Saving it is dangerous because many people want it. Of course companies want it; that’s why they collect it in the first place. But governments want it, too. In the United States, the National Security Agency and FBI use secret deals, coercion, threats and legal compulsion to get at the data. Foreign governments just come in and steal it. When a company with personal data goes bankrupt, it’s one of the assets that gets sold.

Saving it is dangerous because it’s hard for companies to secure. For a lot of reasons, computer and network security is very difficult. Attackers have an inherent advantage over defenders, and a sufficiently skilled, funded and motivated attacker will always get in.

And saving it is dangerous because failing to secure it is damaging. It will reduce a company’s profits, reduce its market share, hurt its stock price, cause it public embarrassment, and — in some cases — result in expensive lawsuits and occasionally, criminal charges.

All this makes data a toxic asset, and it continues to be toxic as long as it sits in a company’s computers and networks. The data is vulnerable, and the company is vulnerable. It’s vulnerable to hackers and governments. It’s vulnerable to employee error. And when there’s a toxic data spill, millions of people can be affected. The 2015 Anthem Health data breach affected 80 million people. The 2013 Target Corp. breach affected 110 million. (Source)

Might we be headed back to what Grace Hopper called (derisively) Defensive Computing?

Engineering the Right [...]

Why are so many followers of radical strains of Islam engineers? A new work searches for (and finds) some answers. Roughly, the traits of engineers and extremists overlap.

What the book finds is that engineers are also significantly represented among far right groups, while humanities and social sciences graduates dominate the far left; and the authors argue that the ideology of Islamist radicals, stripped of its religious components, overlaps far more with that of extreme right-wingers than with that of radical left-wingers.

They suggest that the traits that make Islamism attractive to some engineers could also be what makes right-wing extremism attractive to other graduates.

“Political psychology research links a number of personality traits to right-wing attitudes: a propensity to be easily disgusted, a desire to draw rigid social boundaries and a preference for order, structure, and certainty known as ‘need for cognitive closure’,” Dr Hertog said.

“We find that, on average, indicators for these traits are stronger among engineers compared with graduates in general, while they are weaker among students of humanities and social science.” (Source)

Maybe engineers just like things tidy? See The Ideology of Disgust

Prime Conspiracy [...]

Among the first billion prime numbers, for instance, a prime ending in 9 is almost 65 percent more likely to be followed by a prime ending in 1 than another prime ending in 9. In a paper posted online today, Kannan Soundararajan and Robert Lemke Oliver of Stanford University present both numerical and theoretical evidence that prime numbers repel other would-be primes that end in the same digit, and have varied predilections for being followed by primes ending in the other possible final digits.

“We’ve been studying primes for a long time, and no one spotted this before,” said Andrew Granville, a number theorist at the University of Montreal and University College London. “It’s crazy.” (Source)
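The bias is visible even at small scales. A quick empirical check (my own sketch, using a small sieve rather than the paper’s first billion primes):

```python
from collections import Counter

def primes_below(limit):
    """Simple Sieve of Eratosthenes returning all primes below `limit`."""
    sieve = bytearray([1]) * limit
    sieve[0:2] = b'\x00\x00'
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = bytearray(len(sieve[i*i::i]))
    return [i for i in range(limit) if sieve[i]]

primes = primes_below(200_000)
# Count last-digit pairs of consecutive primes (skipping single-digit primes).
pairs = Counter((p % 10, q % 10) for p, q in zip(primes, primes[1:]) if p > 10)
print(pairs[(9, 1)], pairs[(9, 9)])
```

Even among primes below 200,000, consecutive primes ending (9, 1) far outnumber those ending (9, 9).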

A Drone Constructed Bridge [...]

Effects of Caffeine on Sleep Inertia [...]

There is evidence that coffee helps people wake up, and evidence that long term it may exacerbate sleep inertia. Here we will examine those claims.

Supporting “Caffeine is beneficial” (Link)

Sleep Inertia [...]

Both subjective alertness and cognitive performance are low upon waking from sleep. Because First Hours, Best Hours proposes the most productive time of the day is right after “fully waking up,” we explored how long it takes to fully wake up. The answer seems to vary, but two hours is not a bad rule of thumb.

Per Jewett et al., the waking curve is asymptotic, with a rapid improvement in both subjective alertness and cognitive throughput during the first hour and a half, followed by a much slower improvement. (Source)

From a 1999 study: “Sleep inertia is often thought to be a fleeting phenomenon (Dinges 1990), but we found that subjective alertness and cognitive performance could be impaired for more than 2h after awakening, even in subjects who were not sleep deprived…” (Source)

A 2006 study shows a less asymptotic curve, with alertness for the day peaking at about two hours after waking. (Source)
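The asymptotic shape these studies describe can be sketched as a saturating exponential. The time constant and starting level below are illustrative placeholders, not fitted values from any of the studies:

```python
import math

def alertness(t_hours, tau=0.67, start=0.65):
    """Sleep-inertia dissipation modeled as a saturating exponential:
    alertness starts at `start` on waking and approaches 1.0
    asymptotically. `tau` (hours) and `start` are illustrative values."""
    return 1.0 - (1.0 - start) * math.exp(-t_hours / tau)

first_90_min = alertness(1.5) - alertness(0.0)   # fast early recovery
next_90_min = alertness(3.0) - alertness(1.5)    # much slower afterward
print(round(first_90_min, 3), round(next_90_min, 3))
```

With these numbers, most of the recovery happens in the first ninety minutes, matching the rapid-then-slow pattern described above.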

Sleep inertia is more severe for individuals waking from non-REM sleep. Wake from dreaming if you can. 😉

The long-term effects of caffeine on sleep inertia are a matter of some debate. It is unclear whether the short-term benefit of reducing sleep inertia with caffeine is cancelled out by long-term dependence on the drug.


Sleep apnea fragments sleep and can exacerbate sleep inertia.

Subjective alertness is not the best measure of sleep inertia. People are, however, much better at estimating alertness in the morning than at night; the inability to judge the cognitive impact of one’s own sleepiness is particularly pronounced among shift workers. (Source)

Vox article on chronobiology. (Link)

Using the Side-cebo Effect [...]

Placebos work, even when patients know they are a sham. One approach to placebo treatments might be the “Side-cebo”.

The way the side-cebo would work is like this. A physician prescribes a beneficial medicine where compliance is somewhat low, such as a statin. When describing the side effects of the medicine, the physician also notes that studies have found that some people who believe that the statin has beneficial effects on eczema actually do get better skin, although this is “generally thought to be a placebo effect.”

I say this being in the weird position of having created a side-cebo for myself. I am supposed to take a statin every day, and I got really bad at it, because of course you don’t see the effects of it. This is ridiculous, because I know taking this single pill reduces all-cause mortality for people in my risk group by the same amount that increasing exercise by 20 minutes a day does. It’s a massive effect.

I also have sleep issues, waking up constantly at 3 a.m. and unable to get back to sleep. I used to take benzodiazepines for it, but as my tolerance increased I became disturbed by their dangers (benzodiazepines are second only to opiates in drug-related deaths). So I gave up on them and decided just to suffer.

One night I remembered to take my statin pill (which I honestly forget nine out of ten days) and I woke up, weirdly, at 6 a.m. instead of the normal 3 a.m. The next night I took my pill and it happened again. I got excited and got on the internet and looked — was it possible Pravastatin had some effect on sleep inertia? (Sleep inertia is the technical name for my issue — I wake up at 3 a.m. not sleepy, but fully awake, like I’ve had two cups of coffee).

I looked on the internet, and surprise, there was no such side effect. In fact, with other statins (although not mine in particular) some people had reported sleeplessness.

That’s when, weirdly, I decided not to care. I decided to believe that there was some undiscovered side effect of this pill that helped my sleep. I began to take my pill nightly, dramatically lowering my risk of death. I began to sleep better too — not perfectly, and I still have incidents, but far fewer.

I don’t really understand why it works when I know technically that I am lying to myself. I am a bit worried that writing this long article will somehow strip the side-cebo of its power. But it’s hard to argue with the results: for the first time in my life I am actually taking my statin medication regularly, and I am sleeping better.

I wonder if this might have any medical applications, or if the ethics is just too murky on it. In any case, it’s something you can try in your own life — look at the activity you want to make habitual and the issue you want to solve. Pair the two in your mind, and see what happens.

Hugo’s Closet shows another approach to changing behavior, through adding friction to undesired actions.

Sleep Inertia is the sleepiness on waking that helps us fall back to sleep if necessary. I seem to lack it.

A Sound-Fueled Quantum Paradox [...]

Portable Palace is an entryway to quantum knowledge where sound art scientists Evelina Domnitch and Dimitry Gelfand challenge the limits of contemporary physics by the use of high intensity soundwaves. Their installations uncover phenomena of microscopic beauty such as sonoluminescence, sound-triggered levitation or light-induced migration of matter (they once made diamond dust defy gravity and levitate using laser light). Their experiments are scored by the likes of Carter Tutti and Raster-Noton‘s Carsten Nicolai, and they closely relate to the experimental “clique” formed around Phill Niblock’s annual solstice concerts. In the video above, an installation called Implosion Chamber causes oxygen bubbles to implode under the pressure of high frequency sound. The temperatures generated are as high as those found on the surface of the sun, creating shockwaves and luminescent patterns. (Source)

Buried with Beads [...]

One of the blue glass beads was found with a Bronze Age woman buried in Olby, Denmark, in a hollowed oak coffin wearing a sun disc, a smart string skirt decorated with tinkling, small bronze tubes (a decoration on the cords, placed at the front of the skirt), and an overarm bracelet made of amber beads. She had evidently been quite well to do. (Source)


Affordable Learning Georgia’s Top Open Textbooks [...]

A list maintained by Georgia of recommended textbooks for the top 100 undergraduate courses offered in Georgia. Frankly, it is a little out of date.

The following list includes the top-enrolled 100 USG courses with course numbers 1000-4000, as determined by the USG total enrollments for the 2014-2015 academic year. Sequential courses are grouped together when appropriate, and courses must be taught within at least three USG institutions to make the list. The suggested Open Educational Resources and affordable resources include those recommended/developed/adopted by ALG partners including OpenStax College and eCore as well as related results from MERLOT. We encourage your recommendations as well through our Recommendation form. (Source)

The Politics of Economic Inequality in America, A Course [...]

This course offers an overview of the ways in which social science can help us understand the contemporary politics of income distribution and redistribution in America. More specifically, we will examine the nature, causes and consequences of the continuous growth in income inequality since the late 1970s, with an eye to international comparison when helpful. The first part of the course will focus on the determinants of market and disposable income inequality. In the second part, we will turn to the social, political and economic manifestations and consequences of income differences. (Source)

Transmission of Capital and Early Interaction [...]

From the St. Louis Fed page, a proposal that differences in the transmission of generational capital may be in part due to early childhood experiences.

In another study, we looked at the difference between blacks and whites in the intergenerational transmission of human capital. We focused on the roles of time and income spent in the early childhood years to see how they impacted educational outcomes, if at all. We found that the time that parents spend talking to and otherwise interacting with their children is the major reason for the disparity in educational outcomes between black and white children. For example, for black and white parents who spent the same amount of time interacting with their children, there is no black-white attainment gap. (Source)

The full paper is here.

Critical Making Materializing Politics of Design [...]

Carl DiSalvo writes about an approach to design he calls “critical making,” which aims to shed light on political qualities of an issue via participatory modes of making.

DiSalvo starts with discussion of Langdon Winner’s article “Do Artifacts Have Politics?,” which focuses on the Moses bridge design question (see references below). Though criticizing Winner’s harsh take on Moses, DiSalvo notes:

Regardless of whether or not we believe that artifacts have politics, and regardless of the intentionality, fungibility, or liquidity of those politics, artifacts are used for political ends: to express beliefs, desires, and attachments that have political significance. It is worthwhile to understand this, and perhaps even to facilitate this, because it gives us insight into those political commitments and how they are perceived in and conveyed through designed products.

He then discusses a project, the GrowBot Garden project, as an example of participatory design, which can help to avoid the separation of designer from user that sometimes results in design outcomes like the Moses bridges.

Participatory design offers us an alternate practice of design that, ideally, reconfigures the relationship between the user and the designer and results in more democratic conditions and outcomes of conceptualizing, planning, and producing products. Within a participatory design project a collaborative relationship is established between designers and communities of practice to foster and support the design of products for that community (DiSalvo, Clement, and Pipek 2012).

What we are witnessing in the BugBot is the inversion of the automated tomato harvester that Winner discusses. The BugBot is a prototype of an artifact that not only will not force the small-scale farmers to adopt an industrial practice (pesticides), but moreover will allow the small-scale farmers to continue their practices that they themselves characterize as being contrary to those of industrial farming. The BugBot is inherently, and unabashedly, an artifact designed to do a certain set of politics.

Perhaps one way to conceptualize this hybrid practice is a vector (one of many) for critical making—critical making as materializing the politics of design. It’s not that this process leads to new products, but rather that it leads to an elucidation of the politicized factors of practice expressed in prototype product form.


On the Moses project, see Policy Through Bridge Height

Changing perspectives on makerspaces [...]

From Campus Technology (Source)

In the past few years, the thinking around 3D printing has shifted, as the technology has become more or less intertwined with the idea of makerspaces and the maker movement. The “stuff” that students create in makerspaces, via 3D printing or other technologies, is now less important than the overall “maker” experience — interdisciplinary collaboration, hands-on problem-solving, digital literacy, entrepreneurship and more. As the 2016 NMC Horizon Report noted, “Regardless of what they encompass, the general purpose of makerspaces is to provide a place for users to engage in self-directed activities that spark their curiosity, help them identify passions, and build a habit of lifelong learning. By participating in hands-on design and construction in makerspaces, students engage in creative problem-solving and higher-order thinking.” (See our analysis of the full 2016 Horizon Report: Higher Education Edition in our March issue.)

Chronicling America (Library of Congress Newspaper Archive) [...]

Chronicling America contains public domain newspapers (up until 1922) that can be used by students to do primary historical research. It is run by the Library of Congress in concert with the National Endowment for the Humanities. (Site)

Newspaper Detail

Its About page reads:

Chronicling America is a Website providing access to information about historic newspapers and select digitized newspaper pages, and is produced by the National Digital Newspaper Program (NDNP). NDNP, a partnership between the National Endowment for the Humanities (NEH) and the Library of Congress (LC), is a long-term effort to develop an Internet-based, searchable database of U.S. newspapers with descriptive information and select digitization of historic pages. Supported by NEH, this rich digital resource will be developed and permanently maintained at the Library of Congress. An NEH award program will fund the contribution of content from, eventually, all U.S. states and territories.

More information on program guidelines, participation, and technical details is available. (Source)

Is weak U.S. wage growth all because of who’s getting jobs? – Equitable Growth [...]

That’s not the only kind of compositional issue that might affect measured wage growth, however. In an economic letter for the San Francisco Fed, Mary Daly and Benjamin Pyle of the bank and Bart Hobijn of Arizona State University argue that other compositional effects need to be considered as well. First, there are the Baby Boomers. The increase in retirement means that older workers, who are disproportionately higher-wage earners, are no longer included in the average wage, which will push down measured average wage growth. At the same time, many workers are moving from involuntary part-time work to full-time work. Because most of these new full-time jobs are relatively low-wage jobs, that will also pull down measured wage growth.

What should we look at then to understand wage growth? Daly, Hobijn, and Pyle argue for a measure that looks at the wage growth of workers who are continuously employed full-time. Luckily, the Federal Reserve Bank of Atlanta produces just such a metric. Note that this measure tracks a specific worker over time, so it’s not a cross-sectional measure like others that look at wage growth. The graph below shows annual wage growth according to the Atlanta Fed wage growth tracker for “Usually Full-time” workers, the Employment Cost Index, and the average hourly earnings for all private-sector workers from the Current Establishment Survey. (Source)
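The compositional argument is just weighted-average arithmetic, and a toy example (my own numbers, not the letter’s) makes it concrete:

```python
def avg_wage(groups):
    """Average hourly wage over groups of (headcount, wage) tuples."""
    return sum(n * w for n, w in groups) / sum(n for n, _ in groups)

# Year 1: 10 high-wage Boomers near retirement, 80 mid-wage continuing workers.
before = avg_wage([(10, 40.0), (80, 25.0)])
# Year 2: the Boomers retire; the 80 continuing workers get a 4% raise to $26;
# 10 former part-timers move to full-time work at $15.
after = avg_wage([(80, 26.0), (10, 15.0)])
print(round(before, 2), round(after, 2))
```

Every continuing worker got a raise, yet the measured average wage fell: that is the sense in which composition can mask true wage growth, and why a tracker of continuously employed full-time workers tells a different story.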

Benjamin Rush on Shaming [...]

Benjamin Rush, in one of Ben Franklin’s famous meetings, makes a surprisingly modern argument against shaming as punishment: if you take away an offender’s reputation, if you damage it permanently, what motivation do they have to rehabilitate themselves?

The debate over officially sanctioned shaming stretches back to the earliest years of the American Republic. In 1787, just four years after the end of the Revolutionary War, Benjamin Rush, a renowned physician who had signed the Declaration of Independence, made the case against public shaming at a meeting held in Benjamin Franklin’s Philadelphia home: “Crimes should be punished in private or not punished at all.” If an offender’s reputation was badly sullied, he reasoned, perpetrators would have no real incentive to rehabilitate themselves. “A man who has lost his character at a whipping post has nothing valuable left to lose in society,” Rush argued. Like many reformers, Rush believed a person could find penance and rehabilitate themselves behind prison walls rather than face public humiliation in the stocks or pillories of colonial America. His advocacy for more rational, systematic, and proportional punishment spurred the creation of the country’s first modern prison administration in 1790. By the 1830s, England and France had also abandoned the pillory and branding and moved toward the penitentiary—the seemingly enlightened, civilized, technocratic alternative. “Punishment had gradually ceased to be a spectacle,” Michel Foucault writes in Discipline & Punish. “The age of sobriety in punishment had begun.” (Source)

More recently, researcher Brené Brown has demonstrated Rush might have been right. See Why Shame Doesn’t Work

This quote from Rush may have been during one of Franklin’s Juntos

Revealing Van Gogh’s True Colors [...]

In an 1888 letter to his brother Theo, Vincent Van Gogh described one of his most iconic paintings, “The Bedroom”: “The walls are of a pale violet,” he wrote. “The floor—is of red tiles.” If that description strikes you as a little funny—‘aren’t those walls blue?’—you’re not alone. Last month, The Art Institute of Chicago brought together all three Van Gogh bedroom paintings (the artist was so enamoured of “The Bedroom,” he created two more versions of it) for an exhibition titled “Van Gogh’s Bedrooms.” The museum also revealed the scientific results of an investigation into this art history mystery: Just what color were those walls? Senior conservation scientist Francesca Casadio joins Ira to share the results of this international effort to reveal Van Gogh’s true colors—through chemistry. (Source)
