Larry MacPhee: e-Learning

04.15.2019: The early history of Adaptive Courseware

Back in the beginning of what would become my career in educational technology, when I was a student teacher at Serrano Middle School in 1992, I had the rare opportunity to observe a teaching technology that would still be, or is again, considered cutting edge in 2019. The PLATO learning system, developed at the University of Illinois (the same computing powerhouse that brought us Mosaic, the first widely used graphical web browser), was being piloted at a handful of public schools in California and New York. It was an early version of what, today, we call adaptive courseware. PLATO ran on a genuine IBM server: a PC tower with a big toggle power switch near the top, networked to a group of thin client terminals set up in pods of four on round tables at the back of my mentor teacher's classroom. When I arrived, it was in use as a kind of drill-and-practice tool that occupied about a quarter of the students in the classroom at any given time. One of my duties was to power up the system each day before the students arrived.

Over the course of my time there, I got a chance to observe the system and the way the students used it. Each student logged into the system individually and was led through a series of multiple choice problems, often with a relevant graphic as a prompt. The questions were generally appropriate, relevant, and well worded. They were not the same for each student, and what I eventually gleaned was that they were presented dynamically, based on the student's answers to previous questions. If a student got a question right, the system would move on to the next topic or offer up a more challenging question in the same area, according to some algorithm. If the student got a question wrong, it would offer up an easier question, or more questions of a similar type. When the students completed a set number of questions, which was about 20 if I recall correctly, they were done for the day; the machine dismissed them back to the classroom and waited patiently for the next student. The classroom teacher had no part in this process, and was not there to explain, encourage, or guide the students. One thing my mentor teacher commented on with puzzlement was that some of his students, who were doing quite well in class overall, were doing so poorly on the computer that they were in danger of failing. I was asked to observe these students and figure out what was going on. What I found was that they were not reading the questions at all, nor attempting to answer them correctly. Often they would not even be looking at the screen. They would just hit the return key repeatedly to jump to the next question until they reached the required question count, and then rotate back into the regular classroom activities. Clearly, they were so unmotivated by this system that they were just trying to get through it as quickly as possible, and even the fear of a failing grade was not enough to make them try harder.
Middle school students might be a particularly tough crowd.
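For readers curious about the mechanics, the selection logic I observed can be sketched in a few lines. This is purely my reconstruction, not PLATO's actual code; the class name, the bank structure, and the fixed session length of 20 are all illustrative assumptions.

```python
import random

class AdaptiveDrill:
    """A minimal sketch of PLATO-style adaptive question selection
    (hypothetical reconstruction, not the real system)."""

    def __init__(self, bank, session_length=20):
        self.bank = bank                    # {(topic, level): [questions]}, level 1 = easiest
        self.session_length = session_length
        self.asked = 0

    def next_question(self, topic, level, last_correct):
        """Pick the next question based on the previous answer."""
        if last_correct:
            level += 1                      # step up to a harder question
        else:
            level = max(1, level - 1)       # step down, or repeat the easiest level
        pool = self.bank.get((topic, level), [])
        question = random.choice(pool) if pool else None
        self.asked += 1
        return question, level

    def session_done(self):
        # The machine dismisses the student after a fixed question count,
        # regardless of how well (or whether) they actually engaged.
        return self.asked >= self.session_length
```

Note that nothing here checks for engagement: a student mashing the return key advances `asked` just as fast as one who reads every question, which is exactly the failure mode I observed.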

What I learned that day, and what continues to be true today, despite higher resolution graphics and better animations, is that most students need an inspiring instructor to guide them, to praise them, to motivate them to care about the material, and to challenge them to understand it. This might seem astonishing to a computer programmer, who might logically assume that students are there to learn and, if presented with clearly worded questions at the appropriate comprehension level, will rapidly progress to new levels of understanding. Evidently not. Most students just aren't motivated enough to learn entirely on their own. If they were, we wouldn't need teachers and schools, but only textbooks and libraries. When the computer praises them with a "good job," it feels empty. When it pushes them harder, they just lose interest. This might help to explain why MOOCs, a more recent attempt at competency-based, self-paced learning, have also failed: the completion rate of MOOCs, according to a recent and ongoing study, is somewhere between 5 and 15 percent.

Can this type of learning work? I think we can say one thing for sure. It's not only the quality of the content, but the motivation and self-discipline of the students that determines whether independent learning can work. Maybe graduate students would make a better audience than middle schoolers. But even there, I think people respond better to an inspiring instructor than to a rich piece of content. A short video might pique your interest but to stick with it requires some recognition that you're working hard to achieve something, and some regular injections of real enthusiasm help! I'm just not sure that's something the computer can provide. Good news for teachers and professors everywhere. You've got some job security for now.

04.04.2019: AI or IA?

Artificial Intelligence is much in the news these days. On campus, we have six-wheeled autonomous robots delivering food to the dorms. At an educational technology conference I recently attended, the inflated expectations around the potential of AI and "machine learning" were alarming; particularly so because I found myself in the small minority of skeptics. As I see it, there are several problems with AI. The first one is the biggest: it doesn't exist. What people are calling AI is just good programming. If a programmer can anticipate most of the common scenarios, it can appear as though the robot or program is intelligent. But it's not. My Nest thermostat has some clever programming. It lowers the temperature of the house when, using motion sensors, it determines that there's nobody home. Some people call this machine learning, but that's a stretch. This behavior was programmed by a person. The only thing that the thermostat "learns," if you can call it that, is our general pattern of occupancy. While that is cool and useful, true AI, as I see it, must be able to adapt to novel situations, to anticipate based on prior experience, and to avoid past mistakes. We are so far away from true AI that it's a disservice to use the term. Autonomous vehicles are a great example of programming based on anticipated problems. Equipped with GPS, an accurate digital map, and a collection of good sensors, a computer program can drive a car better than a distracted or drunk driver under most conditions, and maybe even better than an attentive novice driver when certain predictable conditions, like a slippery roadway, arise. But just how much less adaptable an AI is than a human being was underscored by the two recent tragic incidents with the Boeing 737 Max 8.
If accounts are true, the pilots understood the problem perfectly, but were unable to wrestle control back from a misguided computer that, due to a faulty sensor, kept nosing the plane down to avoid what it "thought" was a stall, resulting in the deaths of almost 350 people. The best evidence that there was no intelligence in this AI is that both planes hit the ground at upwards of 600 mph, against the express wishes of the pilots. Some might argue that machine error is less common than pilot error, but that's a tough sell, and the machines are going to make different, dumber mistakes than the pilots would, at least in some situations. I hope we have learned from this example that an AI can't be trusted entirely, needs an easy to reach off switch, and that its judgment, especially when human lives are at stake, should be based on the input from more than one sensor. In fact, decisions should be based on the inputs from a minimum of three sensors, so that the majority rules when there's a disagreement. When someone steps in front of a food delivery robot, it freezes like a deer in the headlights. We can't afford to fly planes that way.
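The "minimum of three sensors" point can be made concrete with a small sketch. With three redundant inputs, taking the median is equivalent to letting the two agreeing sensors outvote the faulty one. The sensor names and numbers below are invented for illustration, not drawn from any real avionics system.

```python
from statistics import median

def fused_reading(sensors):
    """Fuse redundant sensor readings so that no single faulty sensor
    can dictate a control decision. With three inputs, the median is
    whatever the two agreeing sensors report: majority rules."""
    readings = [sensor() for sensor in sensors]
    return median(readings)

# Hypothetical example: two healthy angle-of-attack sensors and one
# stuck-high sensor. The stuck sensor is simply outvoted.
healthy_a = lambda: 4.9    # degrees
healthy_b = lambda: 5.1
faulty    = lambda: 24.0   # stuck-high reading

print(fused_reading([healthy_a, healthy_b, faulty]))  # 5.1, outlier ignored
```

With only two sensors, there is no majority when they disagree, which is why a two-input design forces the system to trust one of them blindly.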

Rather than AI, I think what we need is IA: Intelligence Augmentation, where there is a synergy between real intelligence and the machine helper. We need tools that allow us to visualize things we couldn't see in the raw data, to steady our hand, and to do things with more finesse than we could do unaided. The software that keeps a drone stable in a fixed position is far faster and more deft than a human pilot could be, but the AI will keep the drone in that hold position until it runs out of battery and crashes. In fact, a modern airplane relies on a variety of computers to assist in the flying of the plane. When Sully safely landed a passenger jet on the Hudson River after multiple bird strikes took out both engines, he deserved a ton of credit, but he brought the plane down on a glide path guided by the computer. I would rather live in a future where machines augment what we can do with our intelligence, creativity, and good judgment than one where the machines lock us out and make the decisions on our behalf. When I first set up our Nest thermostat, I had miswired it so that the heat, once called for, never shut off. I woke up in the middle of the night to a house that had been heated to over 90 degrees and rising, and joked that the AI was trying to cook me. I can think of nothing more disturbing than being in the cockpit of an airplane, losing the battle with a machine intent on crashing the plane out of a misguided sense that it is being helpful. I have no interest in machines taking away our good jobs or our freedom of choice. I have never found Microsoft's Clippy to be particularly helpful at assisting me with making a numbered list. I believe in human potential, and in technology that helps us to be more than we are, not less than we are.

02.12.2019: Disruptive Technology

The pace of technological change is accelerating, and many of these changes are disruptive to the kinds of jobs we do, and the kinds of services we enjoy. Consider how each of the following professions or industries has been disrupted:

Old Industry                   Disruptor
------------                   ---------
Typesetting                    Desktop Publishing
Secretaries                    Word Processors
Bank Tellers                   ATMs
Music Industry                 Napster --> iTunes --> Spotify
Hotels                         AirBnB/VRBO
Travel Agents                  Expedia, etc.
Encyclopedias                  Wikipedia
News Media                     Social Media
Classified Ads                 Craigslist
Big Box Stores (Sears)         Amazon
Yellow Pages                   Google
Taxis                          Uber/Lyft
Movie Rentals (Blockbuster)    Netflix
Land Line Phones               Cell Phones --> Smartphones
Gas-Powered Cars               Electric/Self-Driving Cars
Maps                           GPS/Voice Navigation
Fossil Fuels                   Wind/Solar
Assembly Lines                 Robots
Medicine                       Genomics (PCR, CRISPR)
Academia                       Online Courses

Did I miss any big ones? E-mail me your additions and I'll update my list. Where I work, in higher education, change has been slower and less transformative so far, but I think we're just at the beginning, and there are a number of critical factors that will drive this change and disrupt academia too. The first is cost. An undergraduate degree has never cost more, but at least in the past you could be assured of a long and prosperous career if you had one. In today's economy, employers tell us that our graduates arrive unprepared for the kind of work they'll be doing. Many of them will never earn the kind of income necessary to pay off those giant student loan debts, and most cannot even count on a job that provides medical benefits when they get sick. Under this kind of pressure, something has got to give. Maybe it's already started? Enrollments are down 9% since 2011, and a number of schools have shut their doors or engaged in mergers. These trends are even more worrisome in parts of the country that are losing population, such as New England and the Midwest. It could be that the value proposition of an undergrad degree is fading, and that students are choosing to forgo the debt load that goes along with it. We'll be watching this trend closely in the coming years, and trying to find ways to stay relevant and affordable. Personalized learning, competency-based education programs, and online courses in general are promising possibilities, but the quality and relevance of these programs will have to increase, and there will be more schools on the losing end than on the winning end. The keys will be ease of use (convenience), quality and breadth of offerings, and price, and it will be difficult to win without strong differentiation from the rest of the market.

11.07.2017: Latest LMS Marketshare Numbers

LMS MarketShare 2017

LMS Marketshare Latest Numbers

Blackboard has a big problem. Absorbing major players like WebCT and Angel hasn't helped them grow, and neither the dubious patenting of the LMS nor the threat of lawsuits against competitors has scared people away from the alternatives. Nobody is moving to Blackboard. They are several years into a complete LMS overhaul, and the migration path from Blackboard Learn to Ultra is anything but clear. It would be foolish for anyone to migrate to Learn at this point, because they would need to migrate again shortly, and it would be risky to commit to Ultra while it remains so unfinished, when there are solid alternatives like D2L Brightspace and Canvas to choose from. Blackboard desperately needs a win, and I suspect they made a screaming deal with the University of Phoenix, which has also struggled in recent years. The best Blackboard can hope for is that there won't be new defections, that current customers will move to Ultra when the time comes, and that Ultra will be good enough to attract new customers. That's a tall order.

11.01.2017: Business Plan for a Quality Online Program

10.25.2017 No Significant Difference

In a meta-analysis of a dozen recent studies, a non-profit education research organization has concluded that there is no significant difference in student learning outcomes when comparing performance in face-to-face courses with hybrid and fully-online courses. Mode of delivery matters less than the quality of the individual course and its instructor. No surprise to me! This finding backs earlier work. http://nosignificantdifference.org

10.25.2017 LMS Switchers

If you're changing LMS this year, you're likely going to Canvas

Although the title of the e-Literate article says that Blackboard may be turning things around, the graph doesn't show it. If you're a school in the U.S. or Canada and you're changing LMS, odds are high that you're going to Canvas (61%) or D2L Brightspace (38%). Note that this graph does not represent current market share, but where the switchers are going. Still, it's a strong predictor of future market share, and it shows that Blackboard's in trouble. Blackboard's current goal seems to be to hold onto existing customers and stop the bleeding until their next-gen LMS, Ultra, is ready for prime time. That date has been slipping for some time however, and I'm pretty certain that if I had to switch LMSes today, I'd go to Canvas. I've been a fan since 2009. But the decision to switch is not an easy one. Changing LMSes is like moving to a new home in a new city in a different country. It's not to be undertaken lightly, because the move is difficult, time consuming, costly, and painful, even when the end result is a better system. So why isn't anyone moving to Blackboard? The answer is simple. You'd have to migrate twice. Once to the current system, and then to the new one. That would be nuts.

09.22.2017 Western Governors in Trouble with Feds

In case you missed it, Western Governors University has been audited by the Federal Department of Education and found not to be distinctly different from a correspondence school. If the finding is upheld, WGU could be forced to return $713 million in federal financial aid. This has serious implications for all schools involved in distance learning and personalized learning! But the Feds are right about one thing. An online course can and should be much more than a correspondence class.

10.13.2017 Update: "WGU is not off the hook."

11.02.2016 The way we assess students makes no sense.

Traditional testing forces students to cram, regurgitate, and forget.

Have you ever thought about why we test students the way we do? What do I mean? Well, we generally test students in isolation from each other. We generally disallow aids like notes, calculators, textbooks, and cellphones. We ban the use of Google and Wikipedia. We set strict time limits and restrict students to their seats. We use a lot of multiple choice and fill-in-the-blank, with perhaps a smattering of short essay. Someone is watching constantly. Now, you may be thinking, "Of course. How else can we keep them from cheating? How else can we find out what they know? How else can we keep them from helping each other?" I would argue that those are the wrong questions. Sugata Mitra has an interesting TED talk in which he develops the idea that the present day education system remains much as it was designed by the British Empire in the Victorian era. At that time, what was needed were clerks and bookkeepers who could do math in their heads, and read and write without need of help, primarily to keep track of goods moved around the world in sailing ships. He argues convincingly that the education system isn't broken. It works remarkably well. It's just that it trains students to do things there is little need for in the information age. Rather than testing for the ability to memorize and regurgitate without understanding, we need to redesign assessment around collaboration, persistence, synthesis, and creativity.

When we attempt to solve a problem at home or at work, what are the first things we do? Gather some background information. Consult an expert. Get some help. Brainstorm. Try more than one approach. Keep at it. None of these methods are allowed during a test, but this is the way we solve problems in the real world. Sure, we need to have the vocabulary. When I go to the hardware store, I need to be able to explain the problem so they can recommend the right tool. Yes, I need some basic understanding as a foundation. But why, in the 21st century, when all of the knowledge of humanity is a few clicks away, must I regurgitate memorized facts on an exam without any help? How often would I not have access to these resources in the real, everyday world? Perhaps if I'm lost in the woods, and my cell phone is out of juice, then I would need to solve a problem in isolation and without assistance. But that seems more like the exception than the rule. Cramming for a test, regurgitating a collection of memorized facts, and forgetting it all the next day is like being a bulimic. There is little educational value in consuming information that you can't retain, just as there is little nutritional value in eating food you don't keep down.

Most problems we face in the real world don't occur in an isolation chamber. They don't have someone hovering over you with a stopwatch. They don't require that all of the knowledge needed to solve the problem is already in your head. They don't require you to stay seated, or to work alone. They don't present you with five distinct choices, only one of which is correct. They don't allow you only one attempt. That would be crazy. And yet, that's exactly how we test students, from elementary school all the way through college. Think about these questions for a bit. What kinds of students are successful at that kind of testing? How well does that reflect their future performance on the job? What skills do employers regularly ask for? When hiring someone, is it more important that they already know how to do the job, or that they are creative, persistent, able to learn, and able to work well with others? How well do we prepare students for the challenges they will face?

What are the skills we need to employ in modern day problem solving? Usually, they involve gaining an understanding of the problem, either by doing research or by getting help from someone who knows more about the topic. Once we understand the problem, we develop one or more strategies to solve it, based on cost, time, effort, and available resources. Often, the first solution is inelegant, but it might be good enough. "Fail small, fail often" is advice I've heard from many successful problem solvers. Don't be afraid to try things. Break the problem into pieces and solve each part separately. Creative solutions rarely come from aiming directly at the problem and going full speed ahead. But the key point here is that we learn to be creative by attacking problems not with a head full of facts, but with a kit full of tools that can be used again and again. You may be thinking that I've got a point, but that it's easier to grade answers right or wrong when we test facts, not opinions. It's actually not so hard to grade students a better way. You look at how they tackled the problem. It's the difference between awarding points only for the answer versus asking students to show their work and evaluating both the quality of the end product and the sophistication of their methods. Let them work in teams. Let them use any resources they can get their hands on. This is an approach to teaching and learning that actually prepares students for a job in the real world.

"But wait," you're saying. "If I assign group work, how can I tell who did what?" Yes, that can be tricky. We've all been assigned to a team where one person does almost nothing, and gets the same amount of credit as those who pulled most of the weight. That's a problem with the way the group members were evaluated. But guess who knows who did which parts of the job, and how well they did them? The members of the group. A very clever way to grade students is to have them evaluate their own performance and that of their fellow group members by secret ballot. Average out the peer grades and compare them to the grade each student gave themselves. You'd be surprised how accurately this will match your own observations, and how well it reveals who did the work. Of course, you also assign the work an overall grade, so that if everyone agrees to give themselves higher grades than they deserve, there is a correction factor. This method may need to be employed more than once before students realize that they are accountable for their actions, so don't give up after just one try. You will find that it becomes even more effective as time goes on.
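The arithmetic behind this scheme is simple enough to sketch. The function below is one hypothetical way to combine the pieces; the names, the 0-to-1 rating scale, and the 0.2 inflation threshold are all illustrative choices, not a prescription.

```python
def group_grades(project_grade, peer_ratings, self_ratings):
    """Combine the instructor's overall project grade with anonymous
    peer ratings (a hypothetical weighting, for illustration).

    peer_ratings[name] -- list of 0-1 scores from the other members
    self_ratings[name] -- that student's own 0-1 score
    Returns {name: (individual grade, self-inflation flag)}.
    """
    grades = {}
    for name, ratings in peer_ratings.items():
        peer_avg = sum(ratings) / len(ratings)
        # Flag a large gap between self-assessment and peer consensus.
        inflated = self_ratings[name] - peer_avg > 0.2
        # Scale each student's share of the overall project grade by the
        # peer consensus; the instructor's grade is the correction factor.
        grades[name] = (round(project_grade * peer_avg, 1), inflated)
    return grades

print(group_grades(
    90,
    {"Ana": [1.0, 0.9], "Ben": [0.5, 0.6]},
    {"Ana": 0.95, "Ben": 0.9},
))
```

In this invented example, the student whose self-rating far exceeds the peer consensus is flagged, while the instructor's project grade keeps the whole group from inflating each other's scores in lockstep.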

There is another thing you can try when assigning group work, if you're still having challenges. Identify the different kinds of work necessary to put together the final project. For example, in a lab experiment, one person is the group manager, whose job is to lead, organize, plan, make decisions and settle disputes. Another is the experimenter, the hands-on person, who must be good at understanding and following instructions. A third is the data collector, who might also be in charge of creating graphs and charts. A fourth is the analyst and writer of the report. A fifth is the presenter. These are somewhat arbitrary divisions of responsibility, but you get the idea. When you assign duties within the group, people sort themselves into the kind of work they like to do. Students who hate to get up in front of others and talk might be excellent writers. Students who like to present might not want to get their hands dirty, or be good at following detailed instructions. That's ok. Everybody can make a contribution. And, if someone really wants to work alone, let them. As long as they understand they have to do the same amount of work as a whole group would, that's fine. That's how the world works.

09.21.2016: The Classroom of the 21st Century

09.06.2016: College is for Everyone?!

08.16.2016: Adaptive Courseware: The Good, the Bad, and the Ugly

11.08.2015: The Coming Disruption of Academia

09.08.2015: 21st Century Learning

02.28.2015: Why College is Like a Gym Membership

11.27.2014: Pick Any Two

06.24.2013: The Technology Adoption Curve

06.17.2013: Time to change my password? Oh, just shoot me now!

03.27.2013: Technology is the answer. What was the question?

03.06.2013: After bad stuff happens :-(

02.28.2013: Latest Reports from the LMS Battleground

10.10.2012 The wave of change that's about to hit higher education

04.03.2012 Blackboard Embraces Open Source...like a Boa constrictor

04.01.2012 What Google and Facebook have in common

03.25.2012 Message to the eContent providers

03.20.2012 Textbooks of the Near Future.

01.06.2012 Are we putting the cart before the horse?

01.03.2012 Unintended Consequences.

10.17.2011 Quality Matters?

09.29.2011 The "do-over" mentality in undergraduate education

12.12.2010 Why going "TSA" on web classes just won't work

06.14.2010 What Google should do next

06.11.2010 Why NAU's Mobile Computing Policy needs rethinking

05.05.2010 College is for Everyone, so Attendance is Mandatory!

04.20.2010 LMS Decisions

04.12.2010 The Hacker-Hipster Manifesto

02.19.2010 What is up with Google lately?

01.22.2010 Preparing for Snow Days, Epidemics, and Other Disasters

01.04.2010 Clickers: Treating the symptoms or the disease?

12.20.2009 Spreading the FUD

10.14.2009 NAU adopts MS Exchange; increase in productivity negligible

10.02.2009 How to get attention in Academia

10.01.2009 Universal Design

09.30.2009 Should NAU site license the MacOS as well as Windows?

09.01.2009 Marketshare change among LMSes over time

05.26.2009 Mac Growth in Higher Ed

05.21.2009 Microsoft on the move?

04.15.2009 Free and Open Source Software in the Enterprise

Why Computing Monopolies are Bad

How fast is your network connection?

Data Visualization

Mossberg puts his finger on it, and his foot in it.

Why can't Microsoft get it right?

The truth about telecommuting

Blackboard's Scholar

Learning Spaces

Podcasting with iTunesU

Gaming on Campus