Help:: Getting Started / Day One [...]

Most people find that using Wikity to bookmark is a good place to start. The following video shows how you can bookmark with Wikity.

Note that in the video the bookmark says ‘Bkmrk’; in recent versions it says ‘Wik-it’. The editor has also been upgraded.

Provenance and Forgery [...]

Most forged art is not very good or well executed. It succeeds not because of its quality but because of an invented provenance that switches off the viewer’s critical mindset:

A number of the forgers — more than half — if you just look at their forgeries in a vacuum, it’s surprising that they fooled anyone. Han van Meegeren’s Vermeers don’t look anything like Vermeers, but they managed to fool people. It is always the accompanying story, the invented provenance — which is essentially a confidence trick that manages to pass off the object — that really tricks the buyer. On further inspection, it’s always a surprise that the work itself could fool people. The way they do it is with a very compelling provenance.


Obvious connections here to assessing the quality of arguments or facts.

Also a connection to the fact that most hacks are the result of Social Engineering.

More on the van Meegeren Vermeers.

VO2 Max and Lifespan [...]

Not surprisingly, smoking had the greatest impact on lifespan. It substantially shortened lives.

But low aerobic capacity wasn’t far behind. The men in the group with the lowest VO2 max had a 21 percent higher risk of dying prematurely than those with middling aerobic capacity, and about a 42 percent higher risk of early death than the men who were the most fit.

Poor fitness turned out to be unhealthier even than high blood pressure or poor cholesterol profiles, the researchers found. Highly fit men with elevated blood pressure or relatively unhealthy cholesterol profiles tended to live longer than out-of-shape men with good blood pressure and cholesterol levels. (Source)

Math Anxiety Contagion [...]

A common impairment with lifelong consequences turns out to be highly contagious between parent and child, a new study shows.

The impairment? Math anxiety.

Means of transmission? Homework help.

Children of highly math-anxious parents learned less math and were more likely to develop math anxiety themselves, but only when their parents provided frequent help on math homework, according to a study of first- and second-graders, published in Psychological Science. (Source)

Four Stages of Solving [...]

Now, using an innovative combination of brain-imaging analyses, researchers have captured four fleeting stages of creative thinking in math. In a paper published in Psychological Science, a team led by John R. Anderson, a professor of psychology and computer science at Carnegie Mellon University, demonstrated a method for reconstructing how the brain moves from understanding a problem to solving it, including the time the brain spends in each stage.

The imaging analysis found four stages in all: encoding (downloading), planning (strategizing), solving (performing the math), and responding (typing out an answer). (Source)

Beer’s Exile [...]

Stafford Beer, creator of Project Cybersyn, went into a self-imposed exile after the Chilean coup that shattered his dream. Of course, he got to influence Eno and Bowie, so it wasn’t all bad.

Stafford Beer was deeply shaken by the 1973 coup, and dedicated his immediate post-Cybersyn life to helping his exiled Chilean colleagues. He separated from his wife, sold the fancy house in Surrey, and retired to a secluded cottage in rural Wales, with no running water and, for a long time, no phone line. He let his once carefully trimmed beard grow to Tolstoyan proportions. A Chilean scientist later claimed that Beer came to Chile a businessman and left a hippie. He gained a passionate following in some surprising circles. In November, 1975, Brian Eno struck up a correspondence with him. Eno got Beer’s books into the hands of his fellow-musicians David Byrne and David Bowie; Bowie put Beer’s “Brain of the Firm” on a list of his favorite books.

Isolated in his cottage, Beer did yoga, painted, wrote poetry, and, occasionally, consulted for clients like Warburtons, a popular British bakery. Management cybernetics flourished nonetheless: Malik, a respected consulting firm in Switzerland, has been applying Beer’s ideas for decades. In his later years, Beer tried to re-create Cybersyn in other countries—Uruguay, Venezuela, Canada—but was invariably foiled by local bureaucrats. In 1980, he wrote to Robert Mugabe, of Zimbabwe, to gauge his interest in creating “a national information network (operating with decentralized nodes using cheap microcomputers) to make the country more governable in every modality.” Mugabe, apparently, had no use for algedonic meters. (Source)


Anatoliy Ivanovich Kitov, the proposer of Russia’s first cybernetic network, faced similar circumstances after the military mobilized against him. See Economic Automated Management System.

Portions of The Brain of the Firm are available here.

 

Ulm School of Design [...]

Apple’s design aesthetic and the crazy Project Cybersyn share a common influence.

Today, one is as likely to hear about Project Cybersyn’s aesthetics as about its politics. The resemblance that the Operations Room—with its all-white, utilitarian surfaces and oversized buttons—bears to the Apple aesthetic is not entirely accidental. The room was designed by Gui Bonsiepe, an innovative German designer who studied and taught at the famed Ulm School of Design, in Germany, and industrial design associated with the Ulm School inspired Steve Jobs and the Apple designer Jonathan Ive. (Source)


See also Project Cybersyn

Beer’s Exile describes post-coup life for the creator of Cybersyn.

Cybernetics in the U.S. had a distinctly hippie flavor. See Techno-pastoralism

The Cussedness of Things [...]

As Eden Medina shows in “Cybernetic Revolutionaries,” her entertaining history of Project Cybersyn, Beer set out to solve an acute dilemma that Allende faced. How was he to nationalize hundreds of companies, reorient their production toward social needs, and replace the price system with central planning, all while fostering the worker participation that he had promised? Beer realized that the planning problems of business managers—how much inventory to hold, what production targets to adopt, how to redeploy idle equipment—were similar to those of central planners. Computers that merely enabled factory automation were of little use; what Beer called the “cussedness of things” required human involvement. It’s here that computers could help—flagging problems in need of immediate attention, say, or helping to simulate the long-term consequences of each decision. By analyzing troves of enterprise data, computers could warn managers of any “incipient instability.” In short, management cybernetics would allow for the reëngineering of socialism—the command-line economy. (Source)

Project Cybersyn [...]

Cybersyn aimed to give a socialist economy the responsiveness of a market economy by means of cybernetics.

Project Cybersyn was a Chilean project from 1971–1973 during the presidency of Salvador Allende aimed at constructing a distributed decision support system to aid in the management of the national economy. The project consisted of four modules: an economic simulator, custom software to check factory performance, an operations room, and a national network of telex machines that were linked to one mainframe computer.[2]

Project Cybersyn was based on viable system model theory and a neural network approach to organizational design, and featured innovative technology for its time: it included a network of telex machines (Cybernet) in state-run enterprises that would transmit and receive information with the government in Santiago. Information from the field would be fed into statistical modeling software (Cyberstride) that would monitor production indicators (such as raw material supplies or high rates of worker absenteeism) in real time, and alert the workers in the first case, and in abnormal situations also the central government, if those parameters fell outside acceptable ranges. The information would also be input into economic simulation software (CHECO, for CHilean ECOnomic simulator) that the government could use to forecast the possible outcome of economic decisions. Finally, a sophisticated operations room (Opsroom) would provide a space where managers could see relevant economic data, formulate responses to emergencies, and transmit advice and directives to enterprises and factories in alarm situations by using the telex network. (Source)
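
The exception-reporting idea described above (flag an indicator only when it falls outside its acceptable band) is simple enough to sketch. The toy Python snippet below is my own illustration, not the historical Cyberstride software; the factory name, indicator names, and thresholds are made up.

```python
# Toy illustration of exception reporting: each enterprise sends daily indicator
# readings, and only readings outside their acceptable band generate an alert.
# (Hypothetical names and thresholds; not the historical Cyberstride code.)
from dataclasses import dataclass

@dataclass
class Indicator:
    name: str
    low: float   # lower bound of the acceptable band
    high: float  # upper bound of the acceptable band

def check_report(factory: str, readings: dict[str, float],
                 indicators: list[Indicator]) -> list[str]:
    """Return an alert for every reported value outside its acceptable range."""
    alerts = []
    for ind in indicators:
        value = readings.get(ind.name)
        if value is not None and not (ind.low <= value <= ind.high):
            alerts.append(f"{factory}: {ind.name}={value} outside [{ind.low}, {ind.high}]")
    return alerts

indicators = [
    Indicator("absenteeism_pct", low=0, high=8),
    Indicator("raw_material_days", low=3, high=30),
]
print(check_report("Textil Andina", {"absenteeism_pct": 14, "raw_material_days": 12}, indicators))
# -> ["Textil Andina: absenteeism_pct=14 outside [0, 8]"]
```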


Cybernetics was long a dream of communist planners. See Cybernetic Red Scare, OGAS.

Project Cybersyn is connected to Apple via the Ulm School of Design.

The fall of the Allende government led to Beer’s Exile.

Networks Without ARPA [...]

Without ARPA’s funding, vision, and project management, there would have been less R&D in computer networking, but even so, there would have still been many pockets of work in the field. What would have been absent is the role of government as a neutral steward of the evolving network. So in my imagined scenario, information networks, instead of being designed by the users themselves, empowered by the open TCP/IP platform, are designed by the telecommunications industry. Now, instead of an Internet, there is a balkanized tapestry of many competing proprietary systems largely controlled by telco service providers.

Each country has its own system, and the browser and the World Wide Web never evolve as such. Telephones have built-in displays and log in automatically to the local service provider, where users immediately encounter an enormous tree of menus. Fees are charged by the bit and for selected interactions, so the service is relatively expensive and usage is sparse. With the low participation, regionalization, and tight control of information services, national brands do not emerge—no Google, Amazon, or Facebook.

Well, all this seems like a bad dream, but in truth such a scenario would have been very unlikely. My own belief is that something akin to today’s Internet would have been so compellingly attractive that it would have emerged from some alternative pathway through the swirling chaos of actions and interactions.

But we’ll never know. (Source)

A Better Status Email [...]

Then I got to Zynga in 2010. Now say what you want about Zynga (and much of it was true), but they were really good at some critical things that make an organization run well. One was the status report. All reports were sent to the entire management team, and I enjoyed reading them. Yes, you heard me right: I enjoyed reading them, even when there were 20 of them.

Why? Because they had important information laid out in a digestible format. I used them to understand what I needed to do, and to learn from what was going right. Please recall that Zynga, in the early days, grew faster than any company I’ve seen. I suspect the efficiency of communication was a big part of that. When I left Zynga, I started to consult. I adapted the status mail to suit the various companies I worked with, throwing in some tricks from Agile. Now I have a simple, solid format that works across any org, big or small. (Source)

Tech as “Cool Babysitter” [...]

All year, riding to meetings and home from drinks, I have been obsessed with figuring out why I hate the Seamless ads in the New York City subway. “Welcome to New York,” one reads. “The role of your mom will be played by us.” That’s quite a claim. Is Seamless going to tell me it’s not too late to go to law school? A second ad suggests that when I think I’m “angry” I might just be “hungry.” A third ad derides suburbanites, who are “dead” because they live in “Westchester.” The personality is half mom, half teenager: “cool babysitter.” Seamless will let me stay up late, eat Frosted Flakes for dinner, and watch an R-rated movie.

Every time I get an email from Seamless I brace myself for the contents, which include phrases like “deliciousness is in the works” and suggestions that I am ordering takeout because I am at a “roof party” or participating in a “fight club.” Seamless allows that I may even be immersed in an “important meeting,” a meeting so important that I am secretly interrupting it to customize a personal pizza. I picture a cool babysitter, Skylar, with his jean vest, telling me as he microwaves a pop-tart that “deliciousness is in the works,” his tone just grazing the surface of mockery, because I am a loser who must be babysat. (Source)

Creeping Playfulness [...]

In Lipstick Traces, an alternative map of 20th-century cultural history, Greil Marcus excerpts the 1977 shareholder report by Warner Communications, which noted that “entertainment has become a necessity.” This was an accurate statement, one that Marcus identified as a warning. Yet neither he nor the Warner executives could have prophesied its corollary, that we would become unable or unwilling to meet our needs without also being entertained. When we learn to expect playfulness from mundane tasks like ordering food or finding a pharmacy, or when we won’t go swimming without a Pokéchaperone, the result is a state of unsuspecting childlikeness, while adults wait in the woods to take their profits. My frustration with these apps only tells me I’m becoming the child they’re informing me I am. That’s the scary part, a dignity so fragile that a cartoon hamster breaks it. (Source)

Aesthetic of Powerlessness [...]

Via Jesse Baron/Sianne Ngai, cuteness is an aesthetic of powerlessness, which may be used to defuse our deep suspicion of technology which is too powerful.

In her essay “The Cuteness of the Avant-Garde,” Sianne Ngai, a professor at Stanford, theorizes cuteness as an “aesthetic of powerlessness.” In the face of the overwhelming question — “What’s it for?” — a strain of avant-garde art responds by playing up its inutility, she argues. It magnifies its impotence until “it begins to look silly.” Ngai’s concerns, admittedly, weigh heavier than any app or Disney-movie soundtrack: she deals in her essay with Beckett, Adorno, and Stein. But one of her key observations, that we tend to read cuteness as evidence of “restricted agency” rather than as evidence of concealed and significant power, proves useful when looking at the visual language of apps. (Source)


Baron sees this as an input into Post-Dignity Design.

Kawaii, a form of adult-accepted cuteness, arose in tech-saturated cultures.

In tech, we have a Preference for Female Voices.

Facebook is Prioritizing Baby Pictures

 

Post-Dignity Design [...]

We’re in the middle of a decade of post-dignity design, whose dogma is cuteness. One explanation would be geopolitical: when the perception of instability is elevated, we seek the safety of naptime aesthetics. Reading about the mania for adult coloring books, a proof so absurd that the New York Times has published four articles about it, you find that some colorers can’t get to sleep without filling in a mandala on paper, while others need “a special time when we’re not allowed to talk about school or kids.” Adulthood stretches pointlessly out ahead of us, the planet is melting off its axis, you will never have a retirement account. Here’s a hamster. That would be the demand-side argument, where the consumer’s fears set the marketer’s tone. That would also be false. The real power lies on the supply side: Hammy wasn’t born in our fantasies, but in a Silicon Valley office. (Source)

Laws Can Spur Innovation [...]

Why did Adler’s automatic speed-control system fail? The technology seemed to work, although we can easily imagine its imperfections. What would happen if the device on the car malfunctioned, for example? And what would prevent drivers from simply disabling it? But the barriers to Adler’s system were not primarily technical. What ultimately doomed it was the lack of laws and governmental organizations to mandate the system’s use.

On 3 February 2014, the National Highway Traffic Safety Administration, the U.S. agency created in 1966 to oversee car safety standards, announced that it was considering requiring vehicle-to-vehicle (V2V) communication technologies on all cars sold in the United States. “This technology would improve safety by allowing vehicles to ‘talk’ to each other and ultimately avoid many crashes altogether,” the press release announcing the decision stated. Since then, the agency has worked steadily to promote the technology and the regulation that would make such systems a reality. Many automotive experts believe that the logical next steps after V2V communication will be smart roads and autonomous vehicles, such as Google’s self-driving cars.

Tech companies and carmakers are working hard to bring about this automotive future. And yet, when that future arrives, it will largely be because of federal laws, first passed in the 1960s, that controlled automotive design and highway construction. While today’s carmakers introduce new safety technologies—airbags, antilock brakes, electronic stability control, rearview backup cameras—into luxury lines as sellable features, typically federal action is needed to push such technologies into all new vehicles. Such laws did not exist in Adler’s day. He was indeed ahead of his time, but as his case so poignantly shows, the success of an innovation often depends as much on the quality of our institutions as it does on the quality of the technology itself. (Source)

Sonically Activated Traffic Signal [...]

Before pressure plates became the standard way to trigger stoplights, a simpler proposal had drivers honk their horns to make the signal change.

He didn’t give up on trying to automate traffic safety, however. He continued to develop car safety devices, and in the late 1920s, he had minor success with a sonically actuated traffic signal. When a driver pulled up to a red light, honking the horn would make the light change. The system was intended for use at intersections where lightly traveled roads met major thoroughfares and where the traffic light needed to change only when a driver had to cross. (Source)

Advert-funded Speed Control in the 1920s [...]

But Adler had misunderstood the basic nature of the conference. Hoover eschewed federal regulation, preferring to let corporations and state and local governments take action voluntarily; he’d created the conference in this spirit. Even if he’d felt otherwise, no federal law or rule gave Hoover the power to regulate automotive design or highway construction. Adler’s invention required coordination among several levels of government and the car industry. Without an authority to mandate speed governors in automobiles and magnetic plates in roads, the system wouldn’t function. Federal regulations over automobiles and highways wouldn’t become law for another 40 years.

Adler was undeterred. By May 1925, he’d gathered a group of financial backers. On the heels of his successful December 1925 test, he continued to demonstrate the system for journalists, signal makers, police chiefs, state motor-vehicle administrators, and potential investors. He suggested that local authorities could defray the cost of installing the magnets by selling advertising space on the same signs that warned drivers of the danger points. He argued that the installation of speed governors could be made a requirement in annual vehicle inspections. (Source)

Post-Heroic Inventors [...]

Adler knew he’d have to spend considerable energy promoting his idea. He belonged to what the historian Eric Hintz has called the “post-heroic” generation of inventors, who followed on the heels of Thomas Edison and Alexander Graham Bell. Though the public still revered such engineering icons, being a lone inventor in the early 20th century was hardly glamorous. By then, large corporations were internalizing the act of invention by creating R&D labs. As organized research became the order of the day, independent inventors increasingly looked to license their patented creations, rather than attempting to manufacture the technology themselves. Corporations were naturally reluctant to license outside technologies—why else have an internal R&D lab?—so inventors had to publicize their technologies to have any hope of success. (Source)

Wig-wag Rail Signal [...]

Adler’s first major project was a new type of flashing signal for grade crossings. At the time, many cars didn’t bother to stop at railroad crossings, with the unsurprising result that about 1,500 people were dying in car-train collisions every year. The eventual solution was to eliminate grade crossings wherever possible by placing rail lines above or below the road. In the meantime, the American Railway Association (ARA), the trade group for the U.S. railroad industry, directed its member companies to install some sort of flashing light at such intersections.

The system that Adler designed was triggered automatically by the train as it approached the intersection. Two lights would flash in an alternating pattern, known as a wigwag, which mimicked the way a man swinging a lantern might warn oncoming cars. Adler’s flashing signal received the ARA’s endorsement, and more than 40 railroad companies adopted it. (Source)

Speed Control in 1925 [...]

On a cool December day in 1925, Charles Adler Jr. stood beside Falls Road, a state highway on Baltimore’s north side. He was there to test his latest invention: an electromagnetic apparatus that would automatically slow cars traveling at unsafe speeds. Adler had embedded magnetic plates in the road where it led into a precarious curve, and he was now waiting for a specially prepared car to drive over the magnets. The magnets would activate a speed governor connected to the vehicle’s engine, slowing it to 24 kilometers per hour.

Adler had developed this automatic speed-control system for railroad crossings, the scene of many deadly accidents at the time. But he soon came to imagine all sorts of applications for it: “Dangerous road intersections, streets on which schools are located, bad curves, and even steep down grades,” according to an article in the Baltimore News. (Source)

Prudence, not Caution [...]

Comedian Dana Carvey famously imitated George H.W. Bush with the line “Wouldn’t be prudent!” Prudence is commonly thought of as caution, but it has an older, richer meaning in ethical and political theory. A prudent man not only knows concepts of right action and conduct, but also has experience, sound judgment, and practical wisdom that he draws on to make the right decision in real-world situations. Prudence, under this definition, is one of the highest virtues. It is time for responsible Republicans to put nation before party, and endorse Hillary Clinton for president. (Source)

The Invisible Primary [...]

The invisible primary is a product of the presidential nominating reform that was instituted in 1972. The reform grew out of the bitter 1968 Democratic nominating race, which was fought against the backdrop of the Vietnam War. The anti-war challenges of Eugene McCarthy and Robert Kennedy drove President Lyndon Johnson from the race. Yet, party leaders, who controlled most of the convention delegates, picked Vice President Hubert Humphrey as the presidential nominee even though he had not entered a single primary. Insurgent Democrats were outraged, and after Humphrey narrowly lost the general election, they engineered a change in the nominating process. State parties were instructed to choose their convention delegates through either a primary election or a caucus open to all registered party voters.

The reform had obvious appeal. What could be more democratic than giving control of presidential nominations to the voters?  Reformers did not foresee the extent to which the new system would be brokered by the news media and failed to account for journalists’ limitations as a political intermediary. They are not in the business of sifting out candidates on the basis of their competency and platforms. They are in the business of finding good stories. Donald Trump was the mother lode. During the invisible primary, the press gave him what every candidate seeks — reams of coverage. In his case, even the media’s attacks were a boon. Many Republicans dislike the press enough that its attacks on one of their own are nearly a seal of approval. (Source)

Yankees and Cubs [...]

There’s only one problem: Hillary really was a fan of both the Cubs and the Yankees. And she really was a big baseball fan as a kid. Bob Somerby collects the evidence today. Here’s a childhood friend reminiscing about her in 1993, six years before New York was even a twinkle in Hillary’s eyes:

“We used to sit on the front porch and solve the world’s problems,” said Rick Ricketts, her neighbor and friend since they were 8. “She also knew all the players and stats, batting averages—Roger Maris, Mickey Mantle—everything about baseball.”

And this, in a 1994 story about a White House party for documentarian Ken Burns when he released “Baseball”:

“That was a great swing,” Burns told her. “Did you get some batting practice before the screening, just to warm up?” Mrs. Clinton, who as a kid was a “big-time” fan of the Chicago Cubs and New York Yankees and “understudied” Ernie Banks and Mickey Mantle, smiled.

How about that? Hillary was telling the truth the whole time. Hard to believe, isn’t it? (Source)

Positive Early Coverage [...]

> Of all the indicators of success in the invisible primary, media exposure is arguably the most important. Media exposure is essential if a candidate is to rise in the polls. Absent a high poll standing, or upward momentum, it’s difficult for a candidate to raise money, win endorsements, or even secure a spot in the pre-primary debates.

Some political scientists offer a different assessment of the invisible primary, arguing that high-level endorsements are the key to early success.[1] That’s been true in some cases, but endorsements tend to be a trailing indicator, the result of a calculated judgment by top party leaders of a candidate’s viability. Other analysts have placed money at the top.[2] Money is clearly important but its real value comes later in the process, when the campaign moves to Super Tuesday and the other multi-state contests where ad buys and field organization become critical.

In the early going, nothing is closer to pure gold than favorable free media exposure. It can boost a candidate’s poll standing and access to money and endorsements. Above all, it bestows credibility. New York Times columnist Russell Baker aptly described the press as the “Great Mentioner.”[3] The nominating campaigns of candidates who are ignored by the media are almost certainly futile, while the campaigns of those who receive close attention get a boost. Ever since 1972, when the nominating process was taken out of the hands of party bosses and given over to the voters in state primaries and caucuses, the press has performed the party’s traditional role of screening potential presidential nominees—deciding which ones are worthy of the voters’ attention. As Theodore H. White wrote in The Making of the President, 1972, “The power of the press is a primordial one. It determines what people will think and talk about—an authority that in other nations is reserved for tyrants, priests, parties, and mandarins.”[4] (Source)

Type of Coverage [...]

The report shows that during the year 2015, major news outlets covered Donald Trump in a way that was unusual given his low initial polling numbers—a high volume of media coverage preceded Trump’s rise in the polls. Trump’s coverage was positive in tone—he received far more “good press” than “bad press.” The volume and tone of the coverage helped propel Trump to the top of Republican polls.

The Democratic race in 2015 received less than half the coverage of the Republican race. Bernie Sanders’ campaign was largely ignored in the early months but, as it began to get coverage, it was overwhelmingly positive in tone. Sanders’ coverage in 2015 was the most favorable of any of the top candidates, Republican or Democratic. For her part, Hillary Clinton had by far the most negative coverage of any candidate. In 11 of the 12 months, her “bad news” outpaced her “good news,” usually by a wide margin, contributing to the increase in her unfavorable poll ratings in 2015. (Source)

The U.S. Is Light On Testing [...]

That last statement would shock many parents and activists who believe the opposite. But according to Schleicher’s reading of the data from more than 70 countries, most nations give their students more standardized tests than the United States does. He notes that the Netherlands, Belgium and Asian countries – all high-performing education systems – administer a lot more. “In many countries there is a test going on every month,” he added.

The data come from student and teacher surveys given alongside international exams known as the Program for International Student Assessment (PISA), given to 15-year-olds around the world. Along with the exam questions, they were asked how frequently they are given standardized tests, for example.

More than a third of 15-year-olds in the Netherlands said they took a standardized test at least once a month. In Israel, more than a fifth said they took a monthly standardized test. In the United States, only 2 percent of students said they took standardized tests this frequently, well below the OECD average of 8 percent. (Source)

A Little Less Precision [...]

Many advocates believe that adopting such an approach to assessment for all students could spur teaching that aims to encourage thinking and reasoning, rather than just passing a test.

“The bottom line for now is we need to broaden what counts in education, so I’m in favor of moving in this direction even if we lose a little bit of precision to get there,” says Brian Stecher, senior social scientist at the RAND Corporation. (Source)

Got Their Book [...]

She discovered Concepts of Biology, a textbook offered through OpenStax College, a nonprofit based at Rice University that seeks to make “open-source” textbooks available to students for free online. Fox decided to pilot the textbook this past spring for two online sections of the course that she teaches.

The effort evidently paid off.

Fox reports a 10 percent increase in successful completion of the course over the previous semester in both sections.  “And the reason was students actually got the book,” says Matt Reed, vice president for learning at Brookdale Community College, a Lincroft, New Jersey-based institution where 40 percent of the students are Pell Grant eligible. (Source)

Free College and the Forgotten Majority [...]

Free college has become the banner headline for Democrats in an effort to attract the energetic, debt-ridden millennials who flocked to the Bernie Sanders campaign.


But what about the 8 million adult college students struggling to complete a degree, and the millions of other adults who wish they could go to college but can’t afford it? Most current tuition assistance programs are aimed at recent high school graduates. Yet a majority (60 percent) of 25- to 64-year-olds do not hold at least an associate degree, and the numbers rise to 71 percent and 79 percent for African-Americans and Latinos, respectively. (Source)

Stern Love [...]

In the era of the self-packaged celebrity, where public image is carefully tailored on social media and authentic candor is rare, the interviews are an almost radical rebuttal to the patty-cake games and singalongs popularized by Jimmy Fallon on “The Tonight Show.”

Mr. Stern believes his approach isn’t just better radio, but also better for whatever product his guest is promoting.

“If someone comes in and the audience feels like ‘Oh my god, I love this person,’ they will want to see their movie,” he said. “It’s a strange thing to say to someone trained in P.R., but it’s the God’s honest truth. If someone has an hour to sit and talk about their life and at the end they say, ‘By the way, that’s what brought me to this movie, or to write this book,’ it’s such a powerful vehicle for promotion.” (Source)

Online Course User Experience: Standards Matter [...]

Research has shown that the design of online courses is an important factor in students’ learning and success. “Consistent course design is the most vital factor for students’ interaction and success in a course.”[1]

Most of our daily life rests on a common understanding of the order of things around us. For those of us old enough to remember, think back to when Microsoft redesigned the Windows menu system with Windows Vista. Multitudes of Windows users were frustrated and confused by the new design and didn’t know where to find “the old” tools, much less intuit that to shut down you had to click “Start.” Imagine the design changed every time you tapped on your tablet or smartphone, requiring you to reorient to the new interface and relearn where your information went or how it was structured. Design standards help us understand and make sense of information, and to use content without thinking about the context.

The same holds true for students in online courses. Courses that are designed to a standard online design allow the student to concentrate on the content, not on thinking about the context or hunting for the information. (Source)

Ephemeral Messaging vs. Disaggregating Identity [...]

The desire for these apps comes from the unnatural state of current online social communication. In real life, all communication happens within a context, and people have only a limited identity in that context.

When I am teaching my class, I use my teacher qualifications. When I am reviewing a restaurant, I want to share the fact that I dine out frequently. When I am talking politics, it is relevant to know if I am liberal or conservative. What mainstream social networks lack is the ability to utilize only the relevant subset of your identity in online communications.

Outside of celebrities and other brands, there is little benefit from being fully identified in every conversation. If I am sharing tech gossip, it is much more useful for my audience to know that I am a high-tech CEO, rather than to know my actual name and entire conversation history.

When I am reviewing a recent Amazon delivery, my professional background is irrelevant. The ideal way to handle the retention of personally identifiable information is to simply not collect it in the first place.

Therefore, if aggregating identity is not useful, we are left wondering why all social networks put so much emphasis on this. It is because these networks are not in the business of enabling effective communication. Their actual business is in collecting users’ personal information and selling it to the highest bidder. Their advertisers are their customers, not the people who use their products. (Source)

The Math of Moderates vs. Base Turnout [...]

Many Democrats believe in the “turnout myth” (full disclosure: I used to believe it myself). The myth runs as follows — “triangulators” such as Clinton and Obama run on moderate platforms to capture the moderate vote, but in doing so they lose the excitement of the liberal wings of their party, and ultimately end up with fewer votes because of depressed base turnout.

As an example of how this could work, liberals point to Republicans, who are said to run more base-focused elections than Democrats and benefit from that. A short tour of the math, however, shows that Democrats can’t afford to sacrifice the middle for the edges.

Self-identified liberalism is growing in America, but it still polls at a fraction of the support for conservatism.

One way of looking at this: to get to 50% support Democrats have to capture a whopping 75% of the moderate vote, while Republicans only need to capture a small fraction (25%) of the moderate vote to win.

Could base turnout overcome this disadvantage? For Republicans, yes. For Democrats, no. A reasonable increase of turnout in an election might push participation up by 5% in that demographic. The “youth wave” in 2008, for example, was an increase of about 4 percentage points of the under 30 vote, breaking 66/31 for Obama.

In practice, this wave of turnout represented one percentage point in the final results — an increase from 17% of the total vote to 18% of the total vote, and was worth less than one percent of advantage to Obama.

In close elections, that percentage point can make a world of difference, but for Democrats it can only do that if going after that vote does not sacrifice moderate support and turnout. For Democrats, a five percent increase in liberal turnout can be offset by a mere 3.5% decrease in the moderate vote. For Republicans, the opposite is true: an increase in the conservative vote more than offsets losses in moderates, because there are more conservatives than moderates in America.
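
A quick back-of-the-envelope check of that arithmetic in Python. The ideology shares below are illustrative values chosen to reproduce the figures in the text, not cited polling data.

```python
# Illustrative ideology shares (not cited data), picked to match the text's figures.
liberal, conservative, moderate = 0.24, 0.41, 0.35

# Share of moderates each party needs, assuming it holds its own base, to reach 50%.
dem_needs = (0.50 - liberal) / moderate        # roughly 75% of moderates
gop_needs = (0.50 - conservative) / moderate   # roughly 25% of moderates
print(f"Democrats need {dem_needs:.0%} of moderates; Republicans need {gop_needs:.0%}.")

# Turnout trade-off: a 5% bump in liberal turnout adds about 0.05 * 24% = 1.2 points
# of the electorate, which a ~3.5% slip among moderates (0.035 * 35% = 1.2 points)
# roughly cancels out.
print(f"Liberal turnout bump: {0.05 * liberal:.1%} of the electorate; "
      f"moderate slip needed to offset it: {(0.05 * liberal) / moderate:.1%}.")
```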

This is why, despite what we might want, the winning national strategy for the Democratic party has been to run a center-left campaign in a center-right nation.

That’s not to say it’s hopeless to get further to the left: you’ll notice that slow drift up in self-identified liberals. That drift largely comes from our party and elected Democrats making the case for liberalism. As we win elections and talk like Democrats, we demonstrate that liberalism works. But we do that by getting into office and showing what good governance looks like. We build that narrative with each election we run, but particularly with each person we get into office to demonstrate liberalism in action. Eventually self-identification rates will be high enough that we can run much further to the left. But that time is not now.

 

The Fractured Left [...]

This paper suggests that lower turnout among leftist citizens and the resulting partisan advantage from mandatory voting could stem from heterogeneous ideology among the left’s support. With diffuse support, a leftist candidate cannot adopt a political position that caters to all its supporters. For example, if rightist citizens all agree on lower taxes and smaller government while leftist citizens are split over protectionism, then we should expect turnout to be lower among the left. Even though citizens have the same strength of preferences, some portion of leftist citizens will care less about the outcome of the election, since no candidate is at their ideal point. (Source)

Growth of Liberal Identification [...]

Self-identified liberalism has grown steadily since 1992.

Almost all this rise is due to polarization in the Democratic Party: moderates becoming more (self-identified) liberal.

At the same time, it is not clear that this is all due to a shift of belief as much as a reassessment of the term.

Referendums and Democracy [...]

There is a popular view that the highest form of democracy is a referendum. We want to debunk that myth. Democracy is much more than consulting the people in “yes” or “no” decisions. The Brexit referendum, the Vancouver public transit referendum, the electoral reform referendum in B.C., the California tax referendums, and the Quebec sovereignty-association referendums all appeared to be the essence of democracy. A closer look tells us that they violated many of its fundamental principles. (Source)

Neoliberalism and Competition [...]

One way of understanding neoliberalism, as Foucault has best highlighted, is as the extension of competitive principles into all walks of life, with the force of the state behind them. Sovereign power does not recede, and nor is it replaced by ‘governance’; it is reconfigured in such a way that society becomes a form of ‘game’, which produces winners and losers. My aim in The Limits of Neoliberalism is to understand some of the ways in which this comes about.

The Majority Illusion in Social Networks [...]

Social behaviors are often contagious, spreading through a population as individuals imitate the decisions and choices of others. A variety of global phenomena, from innovation adoption to the emergence of social norms and political movements, arise as a result of people following a simple local rule, such as copy what others are doing. However, individuals often lack global knowledge of the behaviors of others and must estimate them from the observations of their friends’ behaviors. In some cases, the structure of the underlying social network can dramatically skew an individual’s local observations, making a behavior appear far more common locally than it is globally. We trace the origins of this phenomenon, which we call “the majority illusion,” to the friendship paradox in social networks. As a result of this paradox, a behavior that is globally rare may be systematically overrepresented in the local neighborhoods of many people, i.e., among their friends. Thus, the “majority illusion” may facilitate the spread of social contagions in networks and also explain why systematic biases in social perceptions, for example, of risky behavior, arise. Using synthetic and real-world networks, we explore how the “majority illusion” depends on network structure and develop a statistical model to calculate its magnitude in a network. (Source)
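
A minimal simulation of the effect (my own construction, using only what the abstract describes): put a globally rare trait on a handful of hubs in a scale-free network and compare how common it looks from inside each node’s neighborhood. The networkx graph model and parameters are arbitrary choices.

```python
# Sketch of the "majority illusion": a trait held by a few high-degree hubs looks
# far more common locally than it is globally. Graph model and sizes are arbitrary.
import networkx as nx

G = nx.barabasi_albert_graph(n=100, m=2, seed=42)   # scale-free-ish network

# Give the "behavior" to the 5 highest-degree nodes only (globally rare).
active = set(sorted(G.nodes, key=G.degree, reverse=True)[:5])
global_share = len(active) / G.number_of_nodes()

local_shares = []
for v in G.nodes:
    if v in active:
        continue
    nbrs = list(G.neighbors(v))
    if nbrs:
        local_shares.append(sum(u in active for u in nbrs) / len(nbrs))

avg_local = sum(local_shares) / len(local_shares)
seeing_half = sum(s >= 0.5 for s in local_shares) / len(local_shares)

print(f"Global share of active nodes: {global_share:.0%}")
print(f"Average share of active friends seen by inactive nodes: {avg_local:.0%}")
print(f"Inactive nodes for whom at least half their friends are active: {seeing_half:.0%}")
```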

Real Americans [...]

If you’re one of these “real Americans,” you’re in the majority in almost every respect. Most Americans are white, most are Christian, most don’t have college degrees, and most live in the South or Midwest Census Bureau regions. And yet, only about 1 in 5 voters meets all of these descriptions.

This helps to explain what seems like a paradox. “Real Americans” overwhelmingly voted Republican in the 2012 election. The differences might be even more pronounced this year. And yet, President Obama won re-election four years ago. And Clinton leads Donald Trump in the polls, albeit narrowly. (Source)

Trumping Truth [...]

In a study conducted in October, researchers presented 507 self-identified Republicans and 986 self-identified Democrats with actual things that Trump had said — some of which were true and some of which were false. The researchers might explain, for instance, that “Trump said that the MMR vaccine causes autism,” or they would simply present the assertion that “The MMR vaccine causes autism.” Then they asked people, “How much do you believe this statement?”

“If we told participants that it was Trump that said the misinformation, Republicans were much more likely to believe it and Democrats were much less likely to believe it,” said Briony Swire, a Ph.D. candidate in cognitive psychology at the University of Western Australia, who conducted the study with colleagues at MIT and the University of Bristol. On a 10-point scale, Republicans rated the misinformation 4.8 and Democrats 3.2 when it was attributed to Trump. A similar partisan split appeared with the true statements — Republicans were more likely than Democrats to believe factual statements when told that Trump had said them. “People relate to the world with their partisan lens,” Swire said. (Source)

Exploring the Physical Web [...]

The Physical Web is still pretty new, but the basic idea is that it lets you broadcast any URL to the people around you. Awesome, right? The Physical Web lets you anchor URLs to physical places by way of a BLE beacon, effectively allowing you to “park” a webpage, a link to a file, etc., wherever you want. It’s kind of like putting your own “Pokémon Go” wherever you want for people to find — except without making them surrender all their data 😉 (Source)
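
For a sense of how small the moving parts are, here is a rough sketch of packing a URL into an Eddystone-URL advertisement frame, the beacon format the Physical Web uses. This is not code from the article; the byte constants and length limit are from my recollection of the open Eddystone-URL spec and should be verified against the published document.

```python
# Rough sketch of Eddystone-URL frame encoding (constants from memory of the open
# spec; verify before relying on them). A BLE beacon then broadcasts these bytes.
SCHEMES = {"https://www.": 0x01, "http://www.": 0x00, "https://": 0x03, "http://": 0x02}
EXPANSIONS = {".com/": 0x00, ".org/": 0x01, ".net/": 0x03}  # partial table

def encode_eddystone_url(url: str, tx_power: int = -20) -> bytes:
    frame = bytearray([0x10, tx_power & 0xFF])        # 0x10 = URL frame type, then TX power
    for prefix, code in SCHEMES.items():              # longest prefixes listed first
        if url.startswith(prefix):
            frame.append(code)
            url = url[len(prefix):]
            break
    else:
        raise ValueError("URL scheme not supported by Eddystone-URL")
    for token, code in EXPANSIONS.items():
        url = url.replace(token, chr(code))           # compress common endings to one byte
    body = url.encode("ascii")
    if len(frame) + len(body) > 20:                   # advertisement frames are tiny
        raise ValueError("URL too long for a beacon frame; use a shortener")
    return bytes(frame) + body

print(encode_eddystone_url("https://example.org/menu").hex())
```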

More Debt, Better Outcomes [...]

The report later cites data showing that Americans with high debt balances are more likely to own a home than those with smaller balances. Borrowers with high balances typically attended graduate school and earn more than those with just a bachelor’s degree. Borrowers who are delinquent on their student debt, a large share of whom owe small balances, are the least likely to buy a home, even compared to those with no student debt at all.

“It is education, not student debt, that drives the persistent differences in homeownership,” the report states. (Source)

There Is No Student Debt Bubble [...]

Similarly, the White House also strongly refutes any comparison between the housing market bubble and student debt. “Student debt is less likely to make a recession more severe or slow an expansion in the way that mortgage debt may have,” the paper says.

For that, it cites several factors.

For one, student debt is still low as a share of Americans’ disposable income. In 2015, student debt made up 9% of aggregate income, up from 3% in 2003. By comparison, mortgage debt at its peak in 2007 comprised 84% of aggregate income, up 25 percentage points in five years, the report states.  Mortgage debt dropped back down to 61% in 2015.

Secondly, the White House says, “student loan debt is an investment in human capital that typically pays off through higher lifetime earnings and increased productivity.” (Source)

Defaulters Owe Less [...]

To highlight this divide, the White House points out that borrowers owing the smallest balances are the ones most likely to default. Take the cohort of borrowers who were first required to start making payments on their debt in 2011. Two-thirds of those who defaulted in the following three years owed less than $10,000, the White House says. More than a third of defaulters, 35%, owed less than $5,000. These borrowers owe little because they typically attended college for one or two years and then dropped out. (Source)

Online Attention as Inferior Good [...]

From The Empirical Economics of Online Attention (2016):

We find that higher income households spend less total time online per week. Households making $25,000-$35,000 a year spend ninety-two more minutes a week online than households making $100,000 or more a year in income, and differences vary monotonically over intermediate income levels. Relatedly, we also find that the amount of time on the home device only slightly changes with increases in the number of available web sites and other devices – it slightly declines between 2008 and 2013 – despite large increases in online activity via smartphones and tablets over this time. Finally, the monotonic negative relationship between income and total time suggests online attention is an inferior good, and we find that this relationship remains stable, exhibiting a similar slope of sensitivity to income. We call this property persistent attention inferiority. There is a generally similar decline in total time across all income groups, which is consistent with a simple hypothesis that the allocation of time online at a personal computer declines in response to the introduction of new devices. (Source)

Poverty of Attention [...]

“…[I]n an information-rich world, the wealth of information means a dearth of something else: a scarcity of whatever it is that information consumes. What information consumes is rather obvious: it consumes the attention of its recipients. Hence a wealth of information creates a poverty of attention and a need to allocate that attention efficiently among the overabundance of information sources that might consume it.” (Simon, 1971).

Stewardess Requirements [...]

The 1960s were the golden age of air travel, and working as a flight attendant was one of the most glamorous jobs available for a woman. The criteria were strict: applicants were held to high standards of beauty, and required to be single, under 140 lbs, and between the ages of 20 and 26.

Pan Am:

Of course, the stewardesses were also required to fit nicely into their iconic blue uniforms. They were subjected to regular girdle checks, and even monthly weigh-ins! Anyone who went over the maximum weight was suspended from work without pay until they lost a few pounds. The criteria for appearance were strict, but the stewardesses were more than pretty faces and elegant white gloves. Pan Am was known for its phenomenal on-board food, and the girls were expected to learn silver service and prepare seven courses of French cuisine from scratch while in the air. (Source)

From Gallery to Gauntlet [...]

Elevators in the infamous Pruitt-Igoe development stopped at the communal spaces on every third floor of the building, on the idea that forcing people to pass through the communal areas to get to their apartments would increase community. In practice this did not work out well. (Source)

Undersized elevators that “skip-stopped” on every third floor increased the personal risk to women and children (especially when things started deteriorating after 1957) by forcing them to reach their apartments through long corridors and narrow staircases. The elevator stop “galleries” themselves—intended to support community association—came to be described by residents as “gauntlets.”

Defensible Space [...]

Movement begun in the 1970s to design architecture in a way that allowed residents to defend common spaces from outsiders. Advanced by Oscar Newman, the idea was a reaction to the Le Corbusier-inspired designs of public housing that failed so horribly in the 1960s.

The St. Louis Pruitt-Igoe development provides an example. Newman describes the dream and the reality:

[By] most eminent architects [it] was hailed as the new enlightenment. It followed the planning principles of Le Corbusier and the International Congress of Modern Architects. Even though the density was not very high (50 units to the acre), residents were raised into the air in 11-story buildings. The idea was to keep the grounds and the first floor free for community activity. “A river of trees” was to flow under the buildings. Each building was given communal corridors on every third floor to house a housing project laundry, a communal room, and a garbage room that contained a garbage chute. (Source)

Reality, however, did not live up to expectations:

Occupied by single-parent, welfare families, the design proved a disaster. Because all the grounds were common and disassociated from the units, residents could not identify with them. The areas proved unsafe. The river of trees soon became a sewer of glass and garbage. The mailboxes on the ground floor were vandalized. The corridors, lobbies, elevators, and stairs were dangerous places to walk. They became covered with graffiti and littered with garbage and human waste.

Why did this design, which worked well for middle class developments, fail with a lower-class set of residents? Newman points out that middle class people pay a number of people to “defend” internal spaces — a doorman, for instance, a security guard, a common area supervisor. Without the financial support for such positions, these large open spaces were not “defensible” from attack or abuse.

Newman develops an architecture that links most common areas to a few residents at most, requiring passage through private spaces to reach them.

He describes the issues in this documentary:


Related idea: Hostile Architecture

Unintended effects of Pruitt-Igoe led to safety issues. See From Gallery to Gauntlet.

There are perhaps some parallels with education here, when we ask why certain models of education don’t work as well in underfunded schools — what paid support is missing?

The Writer’s Bench [...]

Throughout the history of the New York City subway’s aerosol art movement there were meeting places for writers known as writer’s corners or writer’s benches. The majority of these meeting places were in the subway system.

The last active location was the 149th Street Grand Concourse subway station in The Bronx, on the 2 and 5 IRT lines. It was active from the 1970s until the decline of subway painting in the late 1980s.

Writers from all over the city congregated at a bench located at the back of the uptown platform. They came to meet, make plans, sign black books and settle disputes. The main activity was watching art on the passing trains (known as benching). The writers would admire and criticize the latest paintings.

This station was an ideal location for a writer’s bench for several reasons. It was a station where the 2 and 5 lines converged. The 2 and 5 lines featured some of the most artistic works in the city. The fact that many lay-ups and train yards for the 2s and 5s were located in both the Bronx and Brooklyn made creativity on these lines extremely competitive. An overpass connecting the uptown and downtown platforms was an ideal vantage point from which to view the passing trains.

Since paintings rarely if ever run on trains today, this bench is no longer frequented by writers. Old-school New York writers occasionally visit the site for the sake of nostalgia. Post-1989 writers and writers from outside New York City occasionally visit it as a historical location. (Source)

Love Bench [...]

The Love Bench pulls people together.

Last month, the East Japan Railway Company (JR East) installed a single pair of heart-shaped hand straps on one of its lines in hopes of sparking romance among its passengers. However, with Valentine’s Day behind us, it seems they aren’t through playing matchmaker.

This time JR Shikoku is strapping on some cupid wings by installing “Love Love Benches” in two of their stations. The seat of the bench slopes inward so that no matter how two people sit on it, they will quickly be brought together thanks to the marvel of gravity.

Nurture, Culture, and Notes [...]

The study is one of the first to put an age-old argument to the test. Some scientists believe that the way people respond to music has a biological basis, because pitches that people often like have particular interval ratios. They argue that this would trump any cultural shaping of musical preferences, effectively making them a universal phenomenon. Ethnomusicologists and music composers, by contrast, think that such preferences are more a product of one’s culture. If a person’s upbringing shapes their preferences, then they are not a universal phenomenon. (Source)

Planned Obsolescence of Light Bulbs [...]

Planned obsolescence was built into light bulbs very early.

The thousand-hour life span of the modern incandescent dates to 1924, when representatives from the world’s largest lighting companies—including such familiar names as Philips, Osram, and General Electric (which took over Shelby Electric circa 1912)—met in Switzerland to form Phoebus, arguably the first cartel with global reach. The bulbs’ life spans had by then increased to the point that they were causing what one senior member of the group described as a “mire” in sales turnover. And so, one of its priorities was to depress lamp life, to a thousand-hour standard. The effort is today considered one of the earliest examples of planned obsolescence at an industrial scale.

When the new bulbs started coming out, Phoebus members rationalized the shorter design life as an effort to establish a quality standard of brighter and more energy-efficient bulbs. But Markus Krajewski, a media-studies professor at the University of Basel, in Switzerland, who has researched Phoebus’s records, told me that the only significant technical innovation in the new bulbs was the precipitous drop in operating life. “It was the explicit aim of the cartel to reduce the life span of the lamps in order to increase sales,” he said. “Economics, not physics.”

Phoebus is easily cast as a conspiracy of big-business evildoers. It even makes an appearance as such in Thomas Pynchon’s weird-lit classic “Gravity’s Rainbow”: the shadowy organization sends an agent in asbestos gloves and seven-inch heels to seize diehard bulbs as they approach their thousandth hour of service. (“Phoebus discovered—one of the great undiscovered discoveries of our time—that consumers need to feel a sense of sin,” Pynchon writes.) In its day, however, the shift to planned obsolescence was in keeping with the views of a growing body of economists and businesspeople who felt that, unless you dealt in coffins, it was bad business and unsound economics to sell a person any product only once. By the late nineteen-twenties, the repetitive-sales model had become so popular that Paul Mazur, a partner at Lehman Brothers, declared obsolescence the “new god” of the American business élite. (Source)

Medical Pot Laws and Opioid Abuse [...]

They found that, in the 17 states with a medical-marijuana law in place by 2013, prescriptions for painkillers and other classes of drugs fell sharply compared with states that did not have a medical-marijuana law. The drops were quite significant: In medical-marijuana states, the average doctor prescribed 265 fewer doses of antidepressants each year, 486 fewer doses of seizure medication, 541 fewer anti-nausea doses and 562 fewer doses of anti-anxiety medication. (Source)

First Packet Failure [...]

“This is part of a series of bugs that I have known and loved… What you won’t read is that this packet failed and it failed, it crashed one of the systems, and the reason it failed was one of the systems was expecting carriage-return line feed and the other system was expecting EOL as a line terminator. So this bug has been with us since the very first Internet packet and it still bugs lots of systems today.” (Source)
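
To make the failure mode concrete, here is a minimal, hypothetical sketch (not the original 1969 systems) of a receiver that expects CRLF line terminators choking on input terminated with a bare LF:

```python
# Hypothetical sketch of the line-terminator mismatch described above:
# a receiver that splits records on CRLF ("\r\n") mishandles data from a
# sender that terminates lines with a bare LF ("\n").

def parse_crlf_records(data: bytes) -> list[bytes]:
    """Naive receiver: treats CRLF as the only record terminator."""
    records = data.split(b"\r\n")
    # Anything left without a trailing CRLF is considered incomplete.
    *complete, remainder = records
    if remainder:
        raise ValueError(f"incomplete record (no CRLF terminator): {remainder!r}")
    return complete

crlf    = b"HELLO\r\nLOGIN guest\r\n"  # sender uses CRLF
lf_only = b"HELLO\nLOGIN guest\n"      # sender uses bare LF

print(parse_crlf_records(crlf))        # [b'HELLO', b'LOGIN guest']
print(parse_crlf_records(lf_only))     # raises ValueError: the whole buffer
                                       # looks like one unterminated record
```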

Hyperuniformity [...]

Torquato had been studying this hidden order since the early 2000s, when he dubbed it “hyperuniformity.” (This term has largely won out over “superhomogeneity,” coined around the same time by Joel Lebowitz of Rutgers University.) Since then, it has turned up in a rapidly expanding family of systems. Beyond bird eyes, hyperuniformity is found in materials called quasicrystals, as well as in mathematical matrices full of random numbers, the large-scale structure of the universe, quantum ensembles, and soft-matter systems like emulsions and colloids.

Scientists are nearly always taken by surprise when it pops up in new places, as if playing whack-a-mole with the universe. They are still searching for a unifying concept underlying these occurrences. In the process, they’ve uncovered novel properties of hyperuniform materials that could prove technologically useful.

From a mathematical standpoint, “the more you study it, the more elegant and conceptually compelling it seems,” said Henry Cohn, a mathematician and packing expert at Microsoft Research New England, referring to hyperuniformity. “On the other hand, what surprises me about it is the potential breadth of its applications.” (Source)

Misuse of CC-licensed Photos [...]

The Wikimedia blog post pointed out that this isn’t the first time that CC-licensed photos have been misused in this way. In 2013, Wikimedian Sage Ross found that his photos of Aaron Swartz were being used in news articles around the world: “Of the 42 news articles he examined, only six followed the licence at least in part. Another nine attributed him but not the licences, nine attributed them to a for-profit photo agency, and a final eighteen provided no attribution at all.” (Source)

Technical Expertise Is Not Major Factor in Startup Success [...]

A quick skim of CB Insights’ collection of 150+ startup post-mortems reveals that only ~5% of post-mortems referenced a lack of technical ability/execution. Most startup failures were caused by building the wrong product, or lacking strong sales skills, or not having a viable business model. The presence or absence of amazing engineers was rarely a factor.

Another way to analyze the value of engineering is to look at highly-valued private companies: Uber, Airbnb, Snapchat, Pinterest, etc. These are certainly challenging products to work on today because most software is hard at a large enough scale. However, it’s doubtful that any of these companies needed 10x engineers for their initial launches. 3x or 2x or maybe even 1x engineers would have been sufficient.

There are, of course, some companies with real technical risk: SpaceX, Zoox, Rigetti Quantum Computing, etc. But for a typical consumer app or SaaS tool, technical risk is low enough to be ignored. (Source)

Premature Optimization [...]

The ability to scale with success is important, but designing products for high scalability from Day 1 is usually a mistake. (“Premature optimization is the root of all evil.” — Donald Knuth) (Source)

Harmony Explained [...]

Most music theory books are like medieval medical textbooks: they contain unjustified superstition, non-reasoning, and funny symbols glorified by Latin phrases. How does music, in particular harmony, actually work, presented as a real, scientific theory of music?
The core to our approach is to consider not only the Physical phenomena of nature but also the Computational phenomena of any machine that must make sense of sound, such as the human brain. In particular we derive the following three fundamental phenomena of music:

  • the Major Scale,
  • the Standard Chord Dictionary, and
  • the difference in feeling between the Major and Minor Triads.
While the Major Scale has been independently derived before by others in a similar manner [Helmholtz1863, Birkhoff1933], I believe the derivation of the Standard Chord Dictionary as well as the difference in feeling between the Major and Minor Triads to be original.

We show to be incomplete the theory of the heretofore agreed-upon authority on this subject, 19th-century Physicist Hermann Helmholtz [Helmholtz1863]: he says notes are in “concord” because the sound playing them together is “less worse” than that of some other notes. But note that, in this theory, more notes can only penalize, some merely less than others, and so the most harmonious sound should be a single note by itself(!) and harmony would not exist as a phenomenon of music at all.

I intend this article to be satisfying to scientists as an original contribution to science and art, yet I also intend it to be approachable by musicians and other curious members of the general public who may have long wondered at the curious properties of tonal music and been frustrated by the lack of satisfying, readable exposition on the subject. Therefore I have written in a deliberately plain and conversational style, avoiding unnecessarily formal language. (Source)
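
As a side note, the “particular interval ratios” at issue can be made concrete. The sketch below is not the paper’s derivation, just an illustration of the just-intonation ratios usually cited for the major and minor triads (4:5:6 and 10:12:15):

```python
# Minimal illustration (not the paper's derivation): frequencies of the
# just-intonation major and minor triads built on A4 = 440 Hz. The major
# triad's frequencies sit in the simple ratio 4:5:6; the minor triad's in
# 10:12:15, which is often cited as one reason the two chords feel different.
from fractions import Fraction

ROOT_HZ = 440.0  # A4, chosen arbitrarily for the example

MAJOR_TRIAD = [Fraction(1), Fraction(5, 4), Fraction(3, 2)]  # root, major third, fifth
MINOR_TRIAD = [Fraction(1), Fraction(6, 5), Fraction(3, 2)]  # root, minor third, fifth

def triad_frequencies(root_hz: float, ratios: list[Fraction]) -> list[float]:
    return [root_hz * float(r) for r in ratios]

print("major:", triad_frequencies(ROOT_HZ, MAJOR_TRIAD))  # [440.0, 550.0, 660.0] -> 4:5:6
print("minor:", triad_frequencies(ROOT_HZ, MINOR_TRIAD))  # [440.0, 528.0, 660.0] -> 10:12:15
```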

Wyoming’s Suicide Problem Is More Than a White Male Problem [...]

“I kept hearing, as I talked to people out here, that it’s all about white men in rural areas, middle-age white men. And it’s true that, statistically, that’s the group most likely to commit suicide,” Pepper said. “But when you start looking at the data, this region of the country leads for men, for women, across all racial groups, across all ethnicities. It’s not just a rural problem, whatever it is is also in urban areas, as well as everywhere in between and across all age groups.”

Pepper thinks that means experts need to look beyond just white men and see what regional factors might explain the problem. Rurality is certainly one, as is the isolation that goes with it; Wyoming is nearly twice the size of New York state but with only 586,000 residents has less than 1/30th the population. She also notes that self-reported depression isn’t higher in the mountains, but alcohol and drug abuse are. Unemployment and low income are associated with high risk of suicide; Wyoming’s employment rate withstood much of the recession of the 2000s, but a recent drop in the prices of oil and minerals, which form the foundation of the state’s economy, is starting to wreak havoc on the finances of many families. There’s also the high rate of gun ownership. (Source)

Decline of Stomach Cancer [...]

Until the late 1930s, stomach cancer was the No. 1 cause of cancer deaths in the United States. Now just 1.8 percent of American cancer deaths are the result of it. No one really knows why the disease has faded — perhaps it is because people stopped eating so much food that was preserved by smoking or salting. Or maybe it was because so many people took antibiotics that H. pylori, the bacteria that can cause stomach cancer, have been squelched. (Source)

World Science U [...]

An open, question-based site offering videos on science.

“Immerse yourself in the world of science. Education for everyone at all levels of interest and knowledge.” (Source)

Silence Is in the Contrast [...]

In 2006, Bernardi’s paper on the physiological effects of silence was the most-downloaded research in the journal Heart. One of his key findings—that silence is heightened by contrasts—is reinforced by neurological research. In 2010, Michael Wehr, who studies sensory processing in the brain at the University of Oregon, observed the brains of mice during short bursts of sound. The onset of a sound prompts a specialized network of neurons in the auditory cortex to light up. But when sounds continue in a relatively constant manner, the neurons largely stop reacting. “What the neurons really do is signal whenever there’s a change,” Wehr says.

The sudden onset of silence is a type of change too, and this fact led Wehr to a surprise. Before his 2010 study, scientists knew that the brain reacts to the start of silences. (This ability helps us react to dangers, for example, or distinguish words in a sentence.) But Wehr’s research extended those findings by showing that, remarkably, the auditory cortex has a separate network of neurons that fire when silence begins. “When a sound suddenly stops, that’s an event just as surely as when a sound starts.” (Source)

Noise Kills [...]

Surprisingly, recent research supports some of Nightingale’s zealous claims. In the mid 20th century, epidemiologists discovered correlations between high blood pressure and chronic noise sources like highways and airports. Later research seemed to link noise to increased rates of sleep loss, heart disease, and tinnitus. (It’s this line of research that hatched the 1960s-era notion of “noise pollution,” a name that implicitly refashions transitory noises as toxic and long-lasting.)

Studies of human physiology help explain how an invisible phenomenon can have such a pronounced physical effect. Sound waves vibrate the bones of the ear, which transmit movement to the snail-shaped cochlea. The cochlea converts physical vibrations into electrical signals that the brain receives. The body reacts immediately and powerfully to these signals, even in the middle of deep sleep. Neurophysiological research suggests that noises first activate the amygdalae, clusters of neurons located in the temporal lobes of the brain, associated with memory formation and emotion. The activation prompts an immediate release of stress hormones like cortisol. People who live in consistently loud environments often experience chronically elevated levels of stress hormones.

Just as the whooshing of a hundred individual cars accumulates into an irritating wall of background noise, the physical effects of noise add up. In 2011, the World Health Organization tried to quantify its health burden in Europe. It concluded that the 340 million residents of western Europe—roughly the same population as that of the United States—annually lost a million years of healthy life because of noise. It even argued that 3,000 heart disease deaths were, at their root, the result of excessive noise. (Source)

Nightingale’s Noise [...]

Dislike of noise has produced some of history’s most eager advocates of silence, as Schwartz explains in his book Making Noise: From Babel to the Big Bang and Beyond. In 1859, the British nurse and social reformer Florence Nightingale wrote, “Unnecessary noise is the most cruel absence of care that can be inflicted on sick or well.” Every careless clatter or banal bit of banter, Nightingale argued, can be a source of alarm, distress, and loss of sleep for recovering patients. She even quoted a lecture that identified “sudden noises” as a cause of death among sick children. (Source)

Utah Suicide [...]

“Last year we were over 600,” said Dr. Todd Grey, the chief medical examiner for Utah. “We’re certainly on track for being over 600 this year. So that means every day, on average, we’re going to see at least one to possibly two suicides.”

A new report shows the youth suicide rate in Utah has nearly tripled since 2007. It is now the leading cause of death among 10 to 17-year-olds in Utah.

“Look at the numbers here folks, these are big numbers,” Grey said. (Source)

Messy In-Betweeness [...]

SF: The tragedy of it was: If only my father—if only all of us—could be ourselves in our own messy in-between category-ness. My father was so much more interesting in an ambiguous state, which she didn’t reach until the last three or four years of her life. Also, she talked to me so much more, saying, “Now that I’m a woman I feel I can communicate more. As a man I felt I couldn’t communicate.” One of the things that gave her real relief was not feeling isolated at the end of her life. The other aspect of how my father found, I wouldn’t say peace, because no one fully changes—toward the end of her life, my father was willing to look into her own past. She was talking a lot more about being Jewish and her family and the history that she had spent so much time covering up. I think that was freeing for her. To stop trying to put on a mask and just begin to confront all the circumstances and historical conditions that shaped who she became. (Source)

Roust and Balancer [...]

Apps attempt to “fix” your filter bubble. (They won’t work.)

We’re not quite there yet, the experts reassure me — and steps could be taken away from that ledge. A social network called Roust, currently in beta, promises to gather an ideologically diverse crowd to “discuss tough topics like politics, religion and social matters.” Opposite the content-blockers of the Internet, extensions like “Balancer” analyze your browsing history and tell you when it skews liberal or conservative. (Source)

Algorithms Don’t Polarize People, People Do [...]

Facebook claims that individual choice limits exposure to cross-cutting content more than algorithms.

“Individual choice has a larger role in limiting exposure to ideologically cross cutting content [than the News Feed algorithm],” a recent study by Facebook’s own data team ruled. “We show that the composition of our social networks is the most important factor limiting the mix of content encountered in social media.”

[Chart showing increasing polarization dating back to the 1970s]

In other words, the thing most polarizing people online is people themselves — a phenomenon that the latest string of anti-Trump apps, browser extensions and add-ons would not appear to help. On top of the unfriending site, there’s an iPhone app called Trump Trump that will eliminate the candidate’s name from the websites you’re browsing, as if he didn’t exist. Remove Donald Trump from Facebook will, as its name suggests, scrub the candidate from your News Feed. A mountain of Chrome extensions will replace Trump’s name or picture with a series of other things: “Voldemort,” “your drunk uncle at Thanksgiving” — even the smiling poop emoji. (Source)


Nailing down the ethical responsibilities of algorithms is a part of Algorithmic Accountability

Degree Assortativity characterizes most human networks and makes them resistant to the inflow of outside ideas.
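
For readers unfamiliar with the term, here is a toy sketch (illustrative graphs only, not data from any of the studies above) of measuring degree assortativity with networkx:

```python
# Illustrative sketch only (toy graphs, not the author's data): computing
# degree assortativity with networkx. Positive values mean high-degree nodes
# tend to link to other high-degree nodes; negative values mean the opposite.
# Real social networks tend to be assortative.
import networkx as nx

# Assortative example: a 5-clique plus a 5-cycle as one graph. Every edge
# joins nodes of equal degree (4-4 or 2-2), so the coefficient is +1.
assortative = nx.disjoint_union(nx.complete_graph(5), nx.cycle_graph(5))

# Disassortative example: a star, where the hub (degree 20) only ever
# links to leaves (degree 1), giving a coefficient of -1.
star = nx.star_graph(20)

print(nx.degree_assortativity_coefficient(assortative))  # 1.0
print(nx.degree_assortativity_coefficient(star))          # -1.0
```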

Pockets of Polarization [...]

On Twitter, for instance, people who tweet about politics tend to tweet primarily at and with people who belong to the same party, creating what one team of researchers called “pockets of political polarization.” (A 2014 study suggested such pockets could become less polarized as they tweeted with other groups, but the jury’s still out on that one.) On Facebook, the average user agrees with the politics of more than three-fourths of her friends. The social network has found that affinity is more pronounced among liberals than it is among conservatives; it’s also found that, because most users signal to the algorithm (through their clicks) that they’re more interested in stories that agree with their politics, the algorithm tends to surface more of that agreeable, re-affirmative content. (Source)

Newness and Retweeting [...]

The researchers made a few other telling observations, as well: Most clicks to news stories, they found, were made on links shared by regular Twitter users, and not by the media organizations themselves. The links that users clicked were much older than we generally assume — some had been published for several days, in fact. (Source)

Readerless Sharing [...]

Now, as if it needed further proof, the satirical headline’s been validated once again: According to a new study by computer scientists at Columbia University and the French National Institute, 59 percent of links shared on social media have never actually been clicked: In other words, most people appear to retweet news without ever reading it. (Source)

Breaking and Entering [...]

Empathy, humility, compassion, conscience: These are the key ingredients missing in the pursuit of innovation, Ms. Helfand argues, and in her book she explores design, and by extension innovation, as an intrinsically human discipline — albeit one that seems to have lost its way. Ms. Helfand argues that innovation is now predicated less on creating and more on the undoing of the work of others.

“In this humility-poor environment, the idea of disruption appeals as a kind of subversive provocation,” she writes. “Too many designers think they are innovating when they are merely breaking and entering.”

OxyContin Ring [...]

The doctor began prescribing the opioid painkiller OxyContin – in extraordinary quantities. In a single week in September, she issued orders for 1,500 pills, more than entire pharmacies sold in a month. In October, it was 11,000 pills. By December, she had prescribed more than 73,000, with a street value of nearly $6 million.

At its headquarters in Stamford, Conn., Purdue Pharma, the maker of OxyContin, tracked the surge in prescriptions. A sales manager went to check out the clinic and the company launched an investigation. It eventually concluded that Lake Medical was working with a corrupt pharmacy in Huntington Park to obtain large quantities of OxyContin.

“Shouldn’t the DEA be contacted about this?” the sales manager, Michele Ringler, told company officials in a 2009 email. Later that evening, she added, “I feel very certain this is an organized drug ring…”

Purdue did not shut off the supply of highly addictive OxyContin and did not tell authorities what it knew about Lake Medical until several years later when the clinic was out of business and its leaders indicted. (Source)

Amazon Is Quietly Eliminating List Prices [...]

“We’ve been conditioned to buy only when things are on sale,” said Bonnie Patten, executive director of Truth in Advertising, a consumer information site. “As a result, what many retailers have done is make sure everything is always on sale. Which means nothing is ever on sale.”

Amazon has both benefited from that conditioning and encouraged it, which is most likely why it is changing cautiously. It began eliminating list prices about two months ago, pricing specialists say, both on products it sold itself and those sold by other merchants on its site. The retailer did not return multiple requests for comment.

“Our data suggests that list prices are going away,” said Guru Hariharan, chief executive of Boomerang Commerce, a retail analytics firm. Last spring, Boomerang compiled a list for The New York Times of 100 pet food products that Amazon said it was selling at a discount to a list price. Only about half of them still say that.

Short Sims [...]

Educational simulations and serious games have evolved quickly over the last couple of decades, from visionary experiments to predictable tools used to support the leading strategies of organizations as diverse as the US Army and global corporations. The research tells us that sims work, and they can teach some things better than any other approach.

But sims as currently conceptualized are too expensive, too time-consuming to build, too platform-dependent, and too hard to update to grow beyond a niche. This has prevented interactive content from becoming integral to educational media, including personalized learning and more comprehensive assessment. (Source)

Sulfation and Detoxification [...]

Even though under normal circumstances dietary inorganic sulfate contributes very little to our sulfate pool, the exogenous administration of small amounts of sulfate in selected forms of delivery may be useful, since, contrary to what is still a common belief, sulfate can be absorbed from the GI tract [41,51]. Along these lines the possible beneficial effects of inorganic sulfates in drinking water should be evaluated. Certain sulfur-containing thermal water baths have been found to be of benefit, probably via transdermal penetration or because of actual drinking of such waters at health spas [21,52-55].

On the other hand, it is important to recollect that sulfation is a major pathway for detoxification of pharmacological agents by the liver. Drugs such as acetaminophen, so frequently used in the treatment of pain associated with joint diseases, require large amounts of sulfate for their excretion. Doses of up to 4 g/day are not infrequent. Thirty-five percent is excreted conjugated with sulfate, 3% conjugated with cysteine [12], and the rest conjugated with glucuronic acid, incidentally a major component of glycosaminoglycans (GAG), which are so critical for the integrity of cartilage and other connective tissues. (Source)

Government by Referendum Is Not Democracy [...]

There is a popular view that the highest form of democracy is a referendum. We want to debunk that myth. Democracy is much more than consulting the people in “yes” or “no” decisions. The Brexit referendum, the Vancouver public transit referendum, the electoral reform referendum in B.C., the California tax referendums, the Quebec sovereignty-association referendums all appeared to be the essence of democracy. A closer look tells us that they violated many of its fundamental principles. (Source)

View From Nowhere [...]

It seems that some kind of scientistic fideism is introduced in precautionary culture, a belief (as in trust) in science that is not carried by science itself. Here, the words of Thomas Nagel seem appropriate: ‘… for objectivity is both underrated and overrated, sometimes by the same persons. It is underrated by those who don’t regard it as a method of understanding the world as it is in itself. It is overrated by those who believe it can provide a complete view of the world on its own, replacing the subjective views from which it has developed. These errors are connected: they both stem from an insufficiently robust sense of reality and of its independence of any particular form of human understanding.’ (Source)

Poor Readers Rely On Annotations of Others [...]

We surveyed students enrolled in Introductory Psychology courses about their text marking preferences and analyzed the marking in their textbooks. Low-skill readers report more reliance on highlighting strategies and actually mark their texts more than better readers. In addition, low-skilled readers prefer to buy used, previously marked texts over new ones! Furthermore, when low-skill readers mark their texts, they are less capable of marking the most relevant material as determined by instructors asked to mark sampled text pages for comparison. All of this adds up to a destructive feedback system in which students who are weak readers can make things even worse in terms of course performance due to their text marking preferences and behaviors. (Source)

Cultural Differences and Textbook Reading [...]

Even students who have good general reading skills may lack discipline-specific skills and require help learning how to approach readings in your discipline. They may not recognize the organizational structure of a text and may lack the skills necessary to discern the important ideas, distinguish argument from evidence, or recognize an author’s intended audience, assumptions, or goals. They may read every word of a chapter or article but not know what they are supposed to do with it.
When students lack the skills to identify the relevant aspects of a reading they may accord every sentence equal weight and thus:
  • take too long with each reading and fall behind
  • fail to comprehend the reading properly or process it inadequately, thus appearing not to have done it
The issues above can be exacerbated for students from other cultural backgrounds, who may be used to different conventions in writing and argumentation and thus have difficulty recognizing the organizational structure of assigned readings. Second-language issues may also slow them down, making it more difficult to keep up with the reading. (Source)

Test Anxiety Is Often Just Poor Reading [...]

It is normal and healthy to feel some anxiety before an exam. Many students, however, complain about “test anxiety”, explaining that they went into a test knowing the material but that they “went blank” when they began to take the exam. Or when they receive their test results, they find that they made “silly mistakes”. What they think is “too much anxiety” may really point to a gap in their study skills.

Why? When most students prepare for a test, they read their notes or textbooks. As you read along, you may feel that you know (understand) what the author is saying. Understanding what you are reading at the moment does NOT mean that you know it well enough to remember it for a test when the book isn’t there to help you. Thus, students may enter a test situation expecting themselves to “know” the material and finding themselves going “blank” when trying to answer a test item.

To be most efficient, each step of your study should be keyed to the test situation itself. So, you first need to prepare to deal with the COMPONENTS OF THE TEST ENVIRONMENT; then, understand THE TEXTBOOK STRUCTURE. Once you know these elements, you can apply KEY STRATEGIES FOR STUDYING which can help you be both better prepared and more confident when taking a test. (Source)

Stand and Deliver Associated With 55% Increase in Fail Rate [...]

The President’s Council of Advisors on Science and Technology has called for a 33% increase in the number of science, technology, engineering, and mathematics (STEM) bachelor’s degrees completed per year and recommended adoption of empirically validated teaching practices as critical to achieving that goal. The studies analyzed here document that active learning leads to increases in examination performance that would raise average grades by half a letter, and that failure rates under traditional lecturing increase by 55% over the rates observed under active learning. The analysis supports theory claiming that calls to increase the number of students receiving STEM degrees could be answered, at least in part, by abandoning traditional lecturing in favor of active learning. (Source)

Reading Tests May Increase Coverage Without Increasing Success [...]

Most interventions designed to increase textbook coverage focus on potentially punitive measures, such as reading quizzes. Though these measures do tend to boost textbook coverage compared to controls, this increased self-reported textbook coverage has not been reliably correlated with academic achievement (McDougall, 1996), and may deter students from taking or remaining in the class. In addition, many of these measures take away valuable class time and increase time faculty spend on grading, making these assessments unfeasible methods of increasing textbook coverage for many teachers. Indeed, one study performed on a community college introductory psychology class population found that students who completed reading focus worksheets AND received specific extensive timely feedback on their assignments performed better on the midterm and final examinations than their counterparts, and were less likely to drop out of the class (Ryan, 2006). Unfortunately, the same reading worksheet returned without extensive feedback did not produce that same high level of academic performance, nor did regular reading quizzes in the same population. Both of these reading compliance measures were, however, associated with a drop in student retention, with the quizzed group being most likely to drop before the midterm.

More Data on Lack of Student Reading [...]

Unfortunately, too many college students are not reading the required textbook material for their courses. One survey of physics students found that less than 40% of students in the introductory physics course regularly read the textbook (Podolefsky & Finkelstein, 2006). Psychology students read only 27.46% of the assigned readings before class and only 69.98% before an exam (Clump, Bauer, & Bradley, 2004). In one introductory economics course only 17% of students reported completing all assigned readings (Schnieder, 2001). Two more studies with community college populations found that a shocking one-third to three-fourths of students failed to complete any portion of assigned readings before their psychology and education classes (McDougall & Cordeiro, 1993; McDougall & Cordeiro, 1992), while one survey conducted at two four-year universities found that over 78% of their freshman and sophomore students reported not reading the textbook at all, or reading it only sparingly, for at least one introductory course (Sikorski, et al., 2002). These are disappointing figures, especially given that research indicates that greater academic achievement is associated with reading text material before coming to lecture (Phillips & Phillips, 2007; Terpstra, 1979), and that textbook reading not only enhances content comprehension and retention, but “improves reading comprehension in the discipline overall” (Ryan, 2006, p. 135). (Source)

Knowing, Remembering, and Digital Reading [...]

Kate Garland, a lecturer in psychology at the University of Leicester in England, is one of the few scientists who has studied this question and reviewed the data. She found that when the exact same material is presented in both media, there is no measurable difference in student performance.

However, there are some subtle distinctions that favor print, which may matter in the long run. In one study involving psychology students, the medium did seem to matter. “We bombarded poor psychology students with economics that they didn’t know,” she says. Two differences emerged. First, more repetition was required with computer reading to impart the same information.

Second, the book readers seemed to digest the material more fully. Garland explains that when you recall something, you either “know” it and it just “comes to you” — without necessarily consciously recalling the context in which you learned it — or you “remember” it by cuing yourself about that context and then arriving at the answer. “Knowing” is better because you can recall the important facts faster and seemingly effortlessly.

“What we found was that people on paper started to ‘know’ the material more quickly over the passage of time,” says Garland. “It took longer and [required] more repeated testing to get into that knowing state [with the computer reading, but] eventually the people who did it on the computer caught up with the people who [were reading] on paper.” (Source)

Desirable Difficulties [...]

Vary learning conditions. By always learning under the same conditions, our brains use cues from those conditions to help remember the material. When those cues are gone (i.e., when conditions change), what seemed learned can be forgotten. You need to come at your material in a variety of ways, so that your students learn within a variety of contexts. The same material can be learned through reading at home, listening to a lecture, problem solving in group exercises, doing class presentations, and on and on. Even varying the environmental setting helps: Research has shown that people who study the same material in two different rooms perform better on tests than those who study the material twice in the same room.

Interleave instruction. Studies have shown that students’ long-term retention improves when topics are interleaved, rather than taught in homogeneous blocks. That is, instead of spending a whole class period on one topic before moving on to the next, spend 15 minutes each on three different topics, before returning to the first topic to cycle through again. By forcing your students to change gears often, you may be encouraging them to “reload” memories each time you return to a topic. That extra mental work is exactly the sort of difficulty that encourages better retention.

Space out study sessions. There’s plenty of evidence that last-minute cramming, while often helpful in terms of short-term performance on exams, does not produce good results in terms of long-term retention of material. Students are better off studying throughout the term, returning frequently to material they’ve already studied. As instructors, therefore, we would serve our students well to move away from having a single exam-prep session at the end of term in favor of repeated (shorter) review sessions spread throughout the semester. (Source)
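
As a toy illustration of the interleaving idea (mine, not the source's), a 90-minute session rotated through three topics in 15-minute slots might be generated like this:

```python
# Toy illustration of interleaved instruction (not from the source): rotate
# through three hypothetical topics in 15-minute slots instead of teaching
# each topic in one long homogeneous block.
from itertools import cycle, islice

topics = ["limits", "derivatives", "integrals"]   # hypothetical course topics
slot_minutes = 15
class_minutes = 90

slots = class_minutes // slot_minutes             # 6 slots of 15 minutes each
schedule = list(islice(cycle(topics), slots))
print(schedule)
# ['limits', 'derivatives', 'integrals', 'limits', 'derivatives', 'integrals']
```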

Ecuadorean Textbook Fail, 1970s Edition [...]

How not to design a textbook — base it on methods the teachers don’t understand.

First a set of “objectives” was decided upon. Then new textbooks were designed. The content and layout of each book had been specifically constructed to correspond to “modern pedagogy”. In constructing these books in this way it was believed that such a pedagogy was definable; that such a pedagogy was correct; and that it could be transplanted from the U.S. to Ecuador and within Ecuador successfully. This was despite the fact that few Ecuadorian teachers had previously studied the concepts of new math or the “whole word method” of teaching reading, and despite the fact that these new methods were significant breaks with past local experience (Lynch, 1974). (Source)

A Different Distribution for Digital Readers [...]

The n of this difference in pattern is low and likely cannot be trusted. But it is worth looking for elsewhere.

The total sample size comprised 231 students, 119 digital tablet and 112 paper readers. The 10 multiple-choice items were scored 10–0 (high to low), while the two short-answer items were coded for comprehension (4–0, high to low). To determine group differences, t-tests compared scores between paper and tablet readers. Results did not show a statistically significant difference in group means between paper and tablet readers for either the multiple-choice or short-answer items.

Nevertheless, an examination of the range and frequencies of score distributions indicated an emerging pattern: Compared to tablet readers, paper readers had greater frequencies of higher scores for both multiple-choice recall and short answers that measured comprehension (tables 1 and 2; figures 2 and 3). When combining the top two scores for comprehension, paper readers showed a higher percentage. Although there is a greater frequency of score 4 with tablets, this corresponds with a higher frequency and percentage of the mean score, 2. Despite no difference between group means, there may be a difference in individual scores. In particular environments or for specific test purposes such as military selection and ranking, this might indicate a significant factor. (Source)
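
The analysis described is a standard independent-samples t-test; a sketch with invented scores (not the study's data) looks like this:

```python
# Sketch of the kind of comparison described above, using invented scores
# rather than the study's data: an independent-samples t-test on
# multiple-choice scores (0-10) for paper vs. tablet readers.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
paper = rng.integers(4, 11, size=112)    # 112 paper readers, scores 4-10
tablet = rng.integers(4, 11, size=119)   # 119 tablet readers, scores 4-10

t_stat, p_value = stats.ttest_ind(paper, tablet, equal_var=False)  # Welch's t-test
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A p-value above the usual 0.05 threshold would match the study's finding
# of no significant difference in group means, even if the shapes of the
# two score distributions differ.
```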

Readers of Digital Screens More Likely to be Better Readers [...]

A 2013 UK survey conducted by the National Literacy Trust with 34,910 students ranging in age from 8 to 16 reported that over 52 percent preferred to read on electronic devices compared to 32 percent who preferred print.10 The data points to possible influences of technology on reading ability: compared to print readers, those who read digital screens are only about half as likely to be above-average readers. Furthermore, the number of children reading from e-books doubled in the prior two years to 12 percent. According to John Douglas, the National Literacy Trust Director, those who read only on-screen are also three times less likely to enjoy reading. Only 12 percent of those who read using technological devices said they really enjoyed reading, compared with 51 percent of those who preferred books. (Source)

Equivalent in Performance but not Time [...]

David Daniel and William Woody urge caution in rushing to e-textbooks and call for further investigation.7 Their study compared college student performance between electronic and paper textbooks. While the results suggested that student scores were similar between the formats, they noted that reading time was significantly higher in the electronic version. In addition, students revealed significantly higher multitasking behaviors with electronic devices in home conditions. These findings uphold recent results involving multitasking habits while using e-textbooks in Baron’s survey.8 Likewise, L. D. Rosen et al. found that during a 15-minute study period, students switched tasks, on average, three times while using electronic devices.9 Taken together, these studies point to adaptive habits and cognitive shortcuts while using technology even though learning is the primary objective. (Source)

Reading Digital Differently [...]

Researchers have noticed changes in reading behavior as readers adopt new habits while interfacing with digital devices.4 For example, findings by Ziming Liu claimed that digital screen readers engaged in greater use of shortcuts such as browsing for keywords and selectivity.5 Moreover, they were more likely to read a document only once and expend less time with in-depth reading. Such habits raise concern about the implications for academic learning.

According to Naomi Baron, university students sampled in the United States, Germany, and Japan said that if cost were the same, about 90 percent prefer hard copy or print for schoolwork.6 For a long text, 92 percent would choose hard copy. Baron also asserts that digital reading makes it easier for students to become distracted and multitask. Of the American and Japanese subjects sampled by Baron, 92 percent reported they found it easiest to concentrate when reading in hard copy (98 percent in Germany). Of the American students, 26 percent said they were likely to multitask while reading in print, compared with 85 percent reading on-screen. (Source)

Meaning Is Found in the Head of the Reader [...]

Skilled readers actively engage the text while those who are less skilled are passive readers. Although both skilled and marginally-skilled readers are proficient in reading the text aloud — this is a simple task — they differ in their comprehension of text because of the way they approach reading…Meaning can only be found in the head of the reader. Thus, readers bring meaning to the spoken or written word by applying their prior knowledge to it. Unskilled readers get stuck at the surface level, struggling with individual words, trying to decode letters and sounds, while skilled readers go to the deep structure and find meaning between and beyond the lines of text. (Maleki & Heerman, 1996, p. 2) (Source)

Seventy Percent of Students Do Not Read the Text Before Class [...]

A consistent pattern of research findings has established compliance with course reading at 20-30% for any given day and assignment (Burchfield & Sappington, 2000; Hobson, 2003; Marshall, 1974; Self, 1987). Faculty face the stark and depressing challenge of facilitating learning when over 70% of the students will not have read assigned course readings.

Surveys show that students see a weak relationship between course reading and academic success. Student perception and linked behavior collected in the National Survey of Student Engagement (2001), for example, underscores the extent to which students relegate course reading to the margins of necessary activity; most college students reported that they do not read course assignments. These results are substantiated by studies that do not rely on self-report. Burchfield and Sappington (2000) found, “On average, about a third of the students will have completed their text assignment on any given day” (p. 59), a compliance rate that has been stable for 30 years (Marshall, 1974; Self, 1987; McDougall & Cordiero, 1993; Hobson, 2003).

Course structure and faculty preconceptions about students affect reading compliance. Course-based characteristics that reduce the likelihood that students will comply with reading include: no justification in the course syllabus for reading selections (Grunert, 1997), little to no differentiation between reading that is actually required to succeed in the course and reading material labeled “required” (Hobson, 2003), and a mismatch between course text literacy levels and students’ reading abilities (Bean, 1996; Leamnson, 1999). (Source)