Larry MacPhee: e-Learning



8.23.2024 AI for dummies

I just heard an ad for C3.ai, one of the many generative AI companies that have sprung up in recent years. Ads like these toss out a bunch of terminology that sounds impressive, even though most people have no idea what any of it means. So here's a very basic primer on AI jargon.

LLM Agnostic: LLM stands for Large Language Model. For AI to sound human, it has to use language the way people do. By dumping lots and lots of content samples into a system for the software to peruse, the builders simulate the way people talk. The "agnostic" part means that the AI in question can use more than one LLM so that, for example, it can generate convincing jargon for biomedicine, law, or whatever specialized field is needed.

IP Free: IP refers to Intellectual Property, specifically the copyrighted original work belonging to other people. What the AI companies are saying is that the content the AI generates is so churned up from various sources that it doesn't appear to violate any one individual's work. I say "appear" because these systems don't actually think or generate novel content; they just rehash the available source material. But catching them at it is nearly impossible.

Hallucination Free: this is kind of a funny one. You may have heard that while AI can in many cases do a pretty good job, it occasionally gets things very wrong. A great example is the fingers problem: when an AI is asked to draw people, it often gets the number of fingers wrong. Sometimes background faces in a group photo look like something out of Edvard Munch's "The Scream." That's the hallucination. In other cases, faces may look too perfect, with no blemishes, freckles, wrinkles, or features out of proportion, because they are amalgams of many faces that average out the minor individual deviations from the norm. In any of these cases, there's a good chance the image is AI generated. That said, it is getting harder to tell AI from real. Take this quiz to see for yourself.
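To make "LLM agnostic" concrete before moving on, here's a minimal sketch in Python of how such a product might be wired up. The model classes are entirely hypothetical, invented for illustration: the point is that the application talks to one generic interface, and any language model that implements it can be swapped in behind the scenes.

    from abc import ABC, abstractmethod

    class LLMBackend(ABC):
        """Generic interface; any LLM that implements it is interchangeable."""
        @abstractmethod
        def complete(self, prompt: str) -> str:
            ...

    class BiomedicalModel(LLMBackend):  # hypothetical specialized model
        def complete(self, prompt: str) -> str:
            return f"[biomedical-sounding answer to: {prompt}]"

    class LegalModel(LLMBackend):       # hypothetical specialized model
        def complete(self, prompt: str) -> str:
            return f"[legal-sounding answer to: {prompt}]"

    def answer(question: str, model: LLMBackend) -> str:
        # The application code never changes when the model is swapped.
        return model.complete(question)

    print(answer("Summarize the precedent.", LegalModel()))

Swap in a different backend and nothing upstream changes, which is all the buzzword really promises.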

Here are a few great examples where the AI got it wrong:

This group photo of female F-35 pilots looks good at first, but zoom in on the ladies in the back row on the right. It gets worse...

If you're familiar with the bible story of Jesus flipping over the sellers' tables in the temple, I don't think this is what was being implied. One more, just for fun...

Thus the majestic salmon complete their long journey, swimming back to their natal stream to spawn.

10.24.2023 The COVID backlash - out with the baby and the bathwater!

When COVID hit higher education in early 2020, a lot of higher-ed upper-level administrators, many of whom didn't have much experience with fully-online education, had to decide quickly how to handle the challenge. Unfortunately, the decision makers didn't take the advice of the people who were experienced in online education. This is not a criticism of just my own institution, but rather a broad overview of academia's response. I think it's worth exploring why they didn't take the advice of the experts, and looking at some of the major downstream consequences that resulted.

Our advice was to move most courses from in-person to online asynchronous delivery, relying primarily on the Learning Management System (LMS). One reason was the concern about inadequate bandwidth for some of our students, many of whom live in rural Arizona and on the Navajo and Hopi reservations, where Internet connectivity is always slow and often spotty. Moving to asynchronous assignments would have reduced the time pressure on students who may have been dealing with non-academic challenges such as caring for sick relatives or recovering from illness themselves. The LMS functions well at lower bandwidth, and when students are not all hitting the system at the same moment. We recommended the use of tools like Zoom and Collaborate only for lower-stakes activities such as office hours, and promoted our streaming media system, Kaltura, for recording short video lectures that could be buffered and viewed at lower bandwidth.

The second reason is that remote synchronous instruction (mediated by tools like Zoom and Collaborate) is a high-risk strategy: if the technology fails, there's no safety net. So-called "Zoom school" is what many institutions settled on, and in the early days of COVID, those failures happened a lot. Zoom bombers disrupted classes, students dropped the call frequently, mics were muted while people were trying to talk, or unmuted when they were supposed to be quiet, resulting in a lot of awkward and embarrassing background noise. For those who didn't use a virtual background, the goings-on behind the camera were also sometimes very distracting. Some of these problems can be chalked up to the inexperience of instructors and support staff with this new mode of instruction. Eventually, better security measures prevented uninvited guests from disrupting classes, and instructors got better at muting and un-muting participants at the right moment, and at monitoring the chat while lecturing. We also started recording lectures for those who dropped off, but the recordings were seldom viewed, and we have the analytics to prove it. Without a doubt, engagement suffered. Many students turned off their cameras, and more just stopped attending. Why? Again, the lack of preparation for this style of instruction was a factor. In a lecture hall, the captive audience has little choice but to sit there and listen. Breaking the content into small chunks and embedding regular engagement activities is good practice for both in-person lecture and Zoom instruction, but it's far less conspicuous to tune out on Zoom, and harder to get called on it.

Some institutions, ours included, went even further and adopted a variant of HyFlex instruction. Without the staff and technical resources to support it properly, in which few schools invested, this meant simultaneously teaching to an in-person and an online audience without much help. For most of our instructors, who had no experience teaching remotely, this was the most difficult thing that could have been assigned to them, and the quality of teaching slipped dramatically. It was especially painful for the remote students, who often got forgotten. It's easy to accuse the students and say they should have stuck with it and been more participatory, but the reality is that they were also unprepared for this rapid adjustment.

So why didn't administrators at more schools choose to shift from live, in-person instruction to online asynchronous, as the experts in online teaching and learning recommended? I think one answer is that these administrators falsely believed that asynchronous instruction was "not as good" as synchronous, and feared student attrition. Those who did have confidence in asynchronous still feared that parents and students believed it was inferior, and wouldn't be willing to pay for it. They wanted to change things as little as possible, and made the false assumption that remote synchronous lecture via Zoom was the smallest change they could make.

Unfortunately, almost everyone looks back on Zoom school and HyFlex as disasters and, in many ways, they were. But one has to wonder about the path not taken. Yes, more change up front would have been necessary, but I truly believe that had we chosen the bolder path at the beginning, we might not now be throwing out all of online education, baby and bathwater. The big winners from COVID seem to have been the for-profit online schools like GCU and SNHU, which stuck to what they were already doing well.

If we don't learn from our mistakes, we are bound to repeat them. Another takeaway: when employees who could work remotely were allowed to, and were monitored on performance metrics rather than time in seat, employee satisfaction was higher and productivity didn't suffer appreciably. So why are we forcing employees to come back to the office if they can do the job from anywhere? A lot of people want things to "return to normal" and, by normal, they mean pre-COVID, but the world has changed, and I think we need to embrace that.

10.22.2023 ChatGPT and the AI Panic

In academia and beyond, ChatGPT has been creating quite a firestorm. Although contract cheating (where you pay someone else to do the work for you) has been around a while, getting the computer to write a passable essay is new. What has academia in a twist is that essay writing used to be something that made cheating more difficult, and it was always considered the gold standard for ensuring that students "know how to think." Faculty who were disparaging of multiple choice tests have, until recently, felt secure in the belief that requiring students to write essays allowed them to uphold academic integrity. Although this may always have been a questionable assumption, it is now a more dubious claim than ever.

I wanted to see for myself just how good the software is, so I picked a topic I know a fair bit about, set up an account in just a few seconds, and asked ChatGPT to write me an essay on natural selection. The essay was generated on the fly within about ten seconds and, had a student submitted this work to me with proper references, I would have shed tears of joy and assigned an A+. Even more amazing is that if I click the Regenerate button, it writes a new essay on the same topic that is just as good!

One of the criticisms I've heard about this tool is that, while the AI makes a convincing argument, it does not do a good job of sticking to the facts. In my test, that was not the case. The information was accurate and written in the everyday language that I would expect a good college student to produce. In other words, I would have had absolutely no clue that this essay was machine generated.

Good news/bad news. The good news is that an entire cottage industry of AI detectors has sprung up to respond to this new challenge. The bad news is that the tools for detecting AI-generated content are really in the same boat as we are. Between false positives (it looks like the student used AI, but they didn't) and false negatives (it looks like the student didn't use AI, but they did), the tools are essentially worthless. OpenAI's own tool for detecting AI-generated text gets it right only 26% of the time, and incorrectly flags human-written text as AI-written 9% of the time. It would seem that Frankenstein has lost control of his monster.
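To see why those two numbers make the detectors nearly useless, run them through Bayes' rule. Here's a back-of-the-envelope sketch in Python using the detection rates quoted above; the fraction of essays that are actually AI-written is an assumption, so I try a few values:

    def p_ai_given_flag(base_rate, sensitivity=0.26, false_positive=0.09):
        """Probability an essay is AI-written, given the detector flagged it."""
        flagged_ai = base_rate * sensitivity
        flagged_human = (1 - base_rate) * false_positive
        return flagged_ai / (flagged_ai + flagged_human)

    for base_rate in (0.10, 0.25, 0.50):
        print(f"If {base_rate:.0%} of essays are AI-written, "
              f"a flagged essay is AI with probability "
              f"{p_ai_given_flag(base_rate):.0%}")

Even if half the class were using AI, roughly one accusation in four would land on an innocent student, and the detector would still miss nearly three quarters of the AI essays. No instructor can make integrity cases on those odds.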

What to do? I expect that while the detection tools may get better, so will the AI's ability to evade detection. There's probably no winning this arms race. Some will pretend that nothing has changed and that everything's fine. Others will hit the panic button and come up with extreme solutions. One option would be to ban all technology and have students write their essays in controlled labs or proctored online settings, where access to these tools is restricted. Another approach would be to lean into the new technology and have students fact-check and provide references for AI-generated content. But there's a middle-of-the-road alternative that may be more reasonable, although it's going to require some effort. An instructor can do one-on-one interviews with students after they submit their essays, live or via Zoom, and ask them questions about their work. It won't take long to learn whether they know their stuff or not.

09.28.2022 What do floppy disks and women's colleges have in common?

What do floppy disks and women's colleges have in common? Your first response might be that both are obsolete, but no, I don't think that's right. There was a story on NPR the other day about the "Floppy Disk King." Tom Persky describes himself as the "last man standing" in the floppy disk marketplace. While sales of floppy disks have been declining for years, there is a niche market for older equipment that still uses this reliable but obsolete technology. Think of voting machines or scientific and medical instruments, for example, that serve a very specific role, are expensive to replace, and work just fine except for the fact that the world has moved on. As Tom's competitors exited the marketplace, he found himself in a niche that, for him, was actually growing. Demand may be low, but it's solid and, without competitors, he's got a monopoly.

I heard another story about a women's college that was yielding to economic pressures and opening up enrollment to male students, when it occurred to me that it's a similar problem. Many colleges, not just women's colleges, are seeing declining enrollment, and the solution for administrators is to bring in more students by whatever means necessary. For administrators of women's colleges, the low-hanging fruit may appear to be admitting men. However, that change destroys the unique value proposition of a women's college. Many of the faculty, prospective and current students, and alumni of the institution are not happy about it, and the result may be unanticipated attrition of female students, loss of good faculty, and loss of alumni dollars, which may erase any gains made by admitting men. If the president of a women's college were to take the opposite approach instead, to hang on, advertise that uniqueness, appeal for support from alumni to preserve the tradition of single-sex education for those who want it, and scoop up students from other women's colleges that are folding or going co-ed, that may be a better long-term strategy for success than becoming just another generic university in an already crowded, competitive market.

08.16.2022 Picking NAU's next LMS

Recently, NAU decided to begin the search for our next Learning Management System. We are currently using the venerable Blackboard Learn for most of our online courses, Moodle for our Personalized Learning programs, and LifterLMS for enrollment in our Continuing Education programs. Getting to a single, modern LMS is something I've been advocating for several years, and it's finally underway. This is a large and ongoing effort involving a lot of people, both technical and academic. We first had to identify some potential candidates, and pretty quickly settled on three alternatives: Blackboard Learn's successor, Ultra; D2L Brightspace; and Instructure's Canvas. Our sister institutions, Arizona State University in Phoenix and the University of Arizona in Tucson, each went their own way, with ASU on Canvas and UA on Brightspace. Our local Coconino Community College (CCC) and the Maricopa community college system are also on Canvas. The for-profit, fully online University of Phoenix was an early adopter of Blackboard Ultra.

We met with representatives from each company, viewed their demos, built a criteria list, offered faculty sandboxes to explore, and then put it to a campus-wide vote. In my personal opinion, though one that all of the Instructional Designers concurred with, there were two excellent options, Brightspace and Canvas, and one less good one, Blackboard Ultra. I studied all three systems in depth, and developed a page that contrasted their most important features. Of the three, Brightspace is my personal favorite because it has lots of customization options, it is highly intuitive, it is visually the most attractive, and the support/sales people were really great. Interestingly, the vote came back very heavily in favor of Canvas, with Blackboard Ultra a distant second and Brightspace in third place. You can view the results here. Why? Faculty and students outnumbered the other voting groups, and both groups have heard, by word of mouth from peers at other institutions, that everyone else is using Canvas. There is some truth to that, and I think it was enough to convince most people. Another faction, while unhappy with Blackboard Learn, believed that moving to Blackboard Ultra would be less work than going to a different product, though from our research that opinion appears to be incorrect. While I'm surprised that Brightspace didn't win more people over, most voters really didn't look at the choices in detail, and it was the least well known of the three.

Don't get me wrong. Canvas will be a fine LMS for NAU, and will be much better than what we have now. But it is far from perfect, and D2L is, in my opinion, putting the most energy into building the best LMS money can buy. Blackboard Ultra is headed in the right direction, but it still feels buggy and unfinished after years of effort and, at the rate Blackboard is losing market share, I worry about the financial stability of the company. So we will be moving to Canvas next summer, with the help of K-16 Solutions as a migration consultant. After over a decade on Blackboard, it will be interesting times ahead.

06.22.2021 Life in the time of COVID

Oh, hi, you're still here! Well this has been quite a strange interlude, hasn't it?

When COVID-19 hit the U.S. in March of 2020, my office got busy helping to develop what became known as the NAUFlex (a variation on Brian Beatty's HyFlex) model of instruction. We have, for many years, supported the use of online tools for face-to-face classes, a model we call "web-enhanced." We've done the same for fully online asynchronous instruction using Blackboard and its predecessors (and Moodle for our Personalized Learning program) dating back to the early 1990s. However, live synchronous online instruction was something we had mostly avoided, for reasons I'll get into below, but demand grew rapidly and we had no choice but to dive in. We immediately expanded support for teaching with conferencing tools like Zoom and Collaborate, with Kaltura for streaming recorded media, and with collaborative productivity tools like Google Apps for Education and (ugh) Microsoft Teams. We were able to convince a few people that uploading PowerPoints and 90-minute lectures was not the best pedagogy. One of us even dropped a best-selling book at an opportune time!

Attendance at our webinars increased by an order of magnitude (be careful what you wish for!), and we received lots of positive feedback about our support. This has been an incredibly difficult year, where our personal and professional lives have been compressed into one, with kids and pets and spouses all locked in with us, but a year we’ve come through about as well as we could by being patient, accommodating, and by working together to address the many unexpected challenges our faculty and students have been facing.

Teaching live online is a hard thing to do well; it's trapeze work without a net. If anything goes wrong in the technology layer, everything comes crashing down. And pedagogical things that are simple in the classroom (everyone break up into groups of five) are clunky at best with our conferencing tools. We endured bad lighting, questionable attire, mouth breathers, curious views into people's homes, and silly animated backgrounds from people who barely had enough bandwidth for audio. We suffered Zoom bombing by racist idiots, people who thought they were on mute but weren't, and people talking while muted, which is the new ALL-CAPS! We've had difficulty engaging students, whom we have to coax to turn on their cameras and turn in their work. We radically changed the way we taught this year, and all without much preparation. At times, it seemed a lot like this:

teaching in 2020

As vaccines are now available, allowing people to gradually return to pre-COVID ways of doing things, it will be interesting to see which aspects of this tech-infused approach to teaching we each decide to keep, and which will go back on the shelf. I’m sure that for many, who were thrust online reluctantly and without adequate time to prepare, Zoom fatigue and the lack of social interaction with colleagues will result in an eager return to old practices. Others may have found that the necessity created by the pandemic inspired a few new and innovative practices worth holding on to. We proved that working online from home is effective, and that the flexibility it afforded had great value.

We cut the use of gasoline so much that oil prices briefly went negative, and the skies over polluted cities cleared, revealing views of distant mountains many people had not seen in their lifetimes. We realized that connection with our students and colleagues was still possible in the online environment, and we stretched ourselves to learn new tools and teaching methodologies. We have made sacrifices to stay safe and well, but I feel grateful to all who have made difficult adjustments to keep the wheels turning. I hope we will continue to use some of the new tricks we've learned, and I also hope to see you all in person again soon.

04.15.2019: The early history of Adaptive Courseware

Back in the beginning of what would become my career in educational technology, when I was a student teacher at Serrano Middle School in 1992, I had the rare opportunity to observe a teaching technology that would still be, or is again, considered cutting edge in 2019. The PLATO learning system, developed at the University of Illinois (the same computing powerhouse that brought us Mosaic, the first widely used graphical web browser), was being piloted in California and New York at a handful of public schools. It was an early version of what, today, we call adaptive courseware. PLATO was running on a genuine IBM server: a PC tower with a big toggle power switch near the top, networked to a group of thin-client terminals set up in pods of four on round tables at the back of my mentor teacher's classroom. When I arrived, it was in use as a kind of drill-and-practice tool that occupied about a quarter of the students in the classroom at any given time. It was one of my duties to power up the system each day before the students arrived.

Over the course of my time there, I got a chance to observe the system and the way the students used it. Each student logged into the system individually and was led through a series of multiple choice problems, often with a relevant graphic like the one above as a prompt. The questions were generally appropriate, relevant, and well worded. They were not the same for each student, and what I eventually gleaned was that they were presented dynamically, based on the student's answers to previous questions. If a student got a question right, the system would move on to the next topic or offer up a more challenging question in the same area, according to some algorithm. If the student got a question wrong, it would offer up an easier question, or more questions of a similar type. When students completed a set number of questions, which was about 20 if I recall correctly, they were done for the day, and the machine dismissed them back to the classroom and waited patiently for the next student. The classroom teacher had no part in this process, and was neither there to explain, nor encourage, nor guide the students.

One thing my mentor teacher commented on with puzzlement was that some of his students, who were doing quite well in class overall, were doing so poorly on the computer that they were in danger of failing. I was asked to observe these students and figure out what was going on. What I found was that they were not reading the questions at all, nor attempting to answer them correctly. Often they would not even be looking at the screen. They would just hit the return key repeatedly to jump to the next question until they reached the required question count, and then rotate back into the regular classroom activities. Clearly, they were so unmotivated by this system that they were just trying to get through it as quickly as possible, and even the fear of a failing grade was not enough to make them try harder. Middle school students might be a particularly tough crowd.
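The selection logic I inferred can be captured in a few lines. Here's a toy sketch in Python; the question bank and the three-level scheme are my invention for illustration, not PLATO's actual algorithm. Right answers bump the student up a difficulty level, wrong answers bump them down, and the session ends after a fixed quota of items:

    import random

    BANK = {level: [f"level-{level} question #{i}" for i in range(1, 21)]
            for level in (1, 2, 3)}           # invented three-level bank
    QUESTIONS_PER_SESSION = 20                # the daily quota I observed

    def run_session(answer_fn, level=2):
        for _ in range(QUESTIONS_PER_SESSION):
            question = random.choice(BANK[level])
            if answer_fn(question):           # right: harder, capped at 3
                level = min(level + 1, 3)
            else:                             # wrong: easier, floored at 1
                level = max(level - 1, 1)
        print("Done for the day. Next student, please.")

    # A student mashing the return key answers every question wrong:
    run_session(lambda q: False)              # sinks to level 1 and stays there

Note what the last line implies: a student who ignores the screen entirely still finishes the session, just at the easiest level, which is exactly the behavior I observed.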

What I learned then, and what continues to be true today, despite higher resolution graphics and better animations, is that most students need an inspiring instructor to guide them, to praise them, to motivate them to care about the material they are learning, and to challenge them to understand it. This might seem astonishing to a computer programmer, who logically assumed that students were there to learn and, if presented with clearly worded questions at the appropriate comprehension level, would rapidly progress to new levels of understanding. Evidently not. Most students just aren't motivated enough to learn entirely on their own. If they were, we wouldn't need teachers and schools, but only textbooks and libraries. When the computer praises them with a "good job," it feels empty. When it pushes them harder, they just lose interest. This might help to explain why MOOCs, a more recent attempt at competency-based, self-paced learning, have also failed. The completion rate of MOOCs, according to a recent and ongoing study, is somewhere between 5 and 15 percent.

Can this type of learning work? I think we can say one thing for sure: it's not only the quality of the content, but the motivation and self-discipline of the students, that determines whether independent learning can work. Maybe graduate students would make a better audience than middle schoolers. But even there, I think people respond better to an inspiring instructor than to a rich piece of content. A short video might pique your interest, but sticking with it requires some recognition that you're working hard to achieve something, and regular injections of real enthusiasm help! I'm just not sure that's something the computer can provide. Good news for teachers and professors everywhere: you've got some job security for now.


04.04.2019: AI or IA?

Artificial Intelligence is much in the news these days. On campus, we have six-wheeled autonomous robots delivering food to the dorms. At an educational technology conference I recently attended, the inflated expectations around the potential of AI and "machine learning" were alarming, particularly so because I found myself in the small minority of skeptics.

As I see it, there are several problems with AI. The first one is the biggest: it doesn't exist. What people are calling AI is just good programming. If a programmer can anticipate most of the common scenarios, it can appear as though the robot or program is intelligent. But it's not. My Nest thermostat has some clever programming. It lowers the temperature of the house when, using motion sensors, it determines that there's nobody home. Some people call this machine learning, but that's a stretch. This behavior was programmed by a person. The only thing the thermostat "learns," if you can call it that, is our general pattern of occupancy. While that is cool and useful, true AI, as I see it, must be able to adapt to novel situations, to anticipate based on prior experience, and to avoid past mistakes. We are so far away from true AI that it's a disservice to use the term.

Autonomous vehicles are a great example of programming based on anticipated problems. Equipped with GPS, an accurate digital map, and a collection of good sensors, a computer program can drive a car better than a distracted or drunk driver under most conditions, and maybe even better than an attentive novice driver when certain predictable conditions, like a slippery roadway, arise. But where it becomes clear that an AI is not as adaptable as a human being is in the two recent tragic incidents with the Boeing 737 Max 8. If accounts are true, the pilots understood the problem perfectly, but were unable to wrestle control back from a misguided computer that, due to a faulty sensor, kept nosing the plane down to avoid what it "thought" was a stall, resulting in the deaths of almost 350 people. The best evidence that there was no intelligence in this AI is that both planes hit the ground at upwards of 600 mph, against the express wishes of the pilots. Some might argue that machine error is less common than pilot error, but that's a tough sell, and the machines are going to make different, dumber mistakes than the pilots would, at least in some situations.

I hope we have learned from this example that an AI can't be trusted entirely, that it needs an easy-to-reach off switch, and that its judgment, especially when human lives are at stake, should be based on input from more than one sensor. In fact, decisions should be based on the inputs from a minimum of three sensors, so that the majority rules when there's a disagreement. When someone steps in front of a food delivery robot, it freezes like a deer in the headlights. We can't afford to fly planes that way.
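The three-sensor rule is simple to state in code. A minimal sketch in Python, assuming three redundant angle-of-attack readings: take the median, so a single faulty sensor is always outvoted.

    def voted_reading(a: float, b: float, c: float) -> float:
        """Median of three redundant sensor readings; one liar is outvoted."""
        return sorted([a, b, c])[1]

    # Two sensors agree the angle of attack is about 5 degrees;
    # the third has failed high.
    print(voted_reading(4.9, 5.1, 47.0))  # -> 5.1, the outlier is ignored

MCAS, by most accounts, took its input from a single angle-of-attack sensor at a time, which is why one bad reading could push the nose down.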

Rather than AI, I think what we need is IA. We need tools that allow us to visualize things we couldn't see in the raw data, to steady our hands, and to do things with more finesse than we could manage unaided. What we need is Intelligence Augmentation, or IA, where there is a synergy between the real intelligence and the machine helper. The software that keeps a drone stable in a fixed position is far faster and more deft than a human pilot could be, but the AI will keep the drone in that hold position until it runs out of battery and crashes. In fact, a modern airplane relies on a variety of computers to assist in flying the plane. When Sully safely landed a passenger jet on the Hudson River after multiple bird strikes took out both engines, he deserved a ton of credit, but he brought the plane down on a glide path guided by the computer.

I would rather live in a future where machines augment what we can do with our intelligence, creativity, and good judgment than one where the machines lock us out and make decisions on our behalf. When I first set up our Nest thermostat, I miswired it so that the heat, once called for, never shut off. I woke up in the middle of the night to a house that had been heated to over 90 degrees and rising, and joked that the AI was trying to cook me. I can think of nothing more disturbing than being in the cockpit of an airplane, losing the battle with a machine intent on crashing the plane out of a misguided sense that it is being helpful. I have no interest in machines taking away our good jobs or our freedom of choice. I never found Microsoft's Clippy to be particularly helpful at assisting me with making a numbered list. I believe in human potential, and in technology that helps us to be more than we are, not less than we are.

02.12.2019: Disruptive Technology

The pace of technological change is accelerating, and many of these changes are disruptive to the kinds of jobs we do, and the kinds of services we enjoy. Consider how each of the following professions or industries has been disrupted:

Old Industry | Disruptor
Typesetting | Desktop Publishing
Secretaries | Word Processors
Bank Tellers | ATMs
Music Industry | Napster-->iTunes-->Spotify
Hotels | AirBnB/VRBO
Travel Agents | Expedia, etc.
Encyclopedias | Wikipedia
News Media | Social Media
Classified Ads | Craigslist
Big Box Stores (Sears) | Amazon
Yellow Pages | Google
Taxis | Uber/Lyft
Movie Rentals (Blockbuster) | Netflix
Land Line Phones | Cell Phones-->Smartphones
Gas-Powered Cars | Electric/Self-Driving Cars
Paper Maps | GPS/Voice Navigation
Fossil Fuels | Wind/Solar
Assembly Lines | Robots
Medicine | Genomics (PCR, CRISPR)
Academia | Online Courses

Did I miss any big ones? E-mail me your additions and I'll update my list. Where I work, in higher education, change has been slower and less transformative so far, but I think we're just at the beginning, and there are a number of critical factors that will drive this change and disrupt academia too. The first is cost. An undergraduate degree has never cost more, but at least in the past you could be assured of a long and prosperous career if you had one. In today's economy, employers tell us that our graduates arrive unprepared for the kind of work they'll be doing. Many of them will never earn the kind of incomes necessary to pay off those giant student loan debts, and most cannot even count on a job that provides medical benefits when they get sick. Under this kind of pressure, something has got to give. Maybe it's already started? Enrollments are down 9% since 2011, and a number of schools have shut their doors or engaged in mergers. These trends are even more worrisome in parts of the country that are losing population, such as New England and the Midwest. It could be that the value proposition of an undergrad degree is fading, and that students are choosing to forego the debt load that goes along with it. We'll be watching this trend closely in the coming years, and trying to find ways to stay relevant and affordable. Personalized Learning, Competency Based Education programs, and online courses in general are promising possibilities, but the quality and relevance of these programs will have to increase, and there will be more schools on the losing end than the winning end. The keys will be ease of use (convenience), quality and breadth of offerings, and price, and it will be difficult to win without strong differentiation from the rest of the market.

11.07.2017: Latest LMS Marketshare Numbers

LMS market share, 2017

Blackboard has a big problem. Neither absorbing major players like WebCT and Angel, nor dubiously patenting the LMS and threatening competitors with lawsuits, has helped them grow or scared people away from the alternatives. Nobody is moving to Blackboard. They are several years into a complete LMS overhaul, and the migration path from Blackboard Learn to Ultra is anything but clear. It would be foolish for anyone to migrate to Learn at this point, because they would need to migrate again shortly, and it would be risky to commit to Ultra while it remains so unfinished, when there are solid alternatives like D2L Brightspace and Canvas to choose from. Blackboard desperately needs a win, and I suspect they made a screaming deal with the University of Phoenix, which has also struggled in recent years. The best Blackboard can hope for is that there won't be new defections, that current customers will move to Ultra when the time comes, and that Ultra will be good enough to attract new customers. That's a tall order.

11.01.2017: Business Plan for a Quality Online Program

10.25.2017 No Significant Difference

In a meta-analysis of a dozen recent studies, a non-profit education research organization has concluded that there is no significant difference in student learning outcomes when comparing performance in face-to-face courses with hybrid and fully-online courses. Mode of delivery matters less than the quality of the individual course and its instructor. No surprise to me! This finding backs earlier work. http://nosignificantdifference.org
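"No significant difference" is a statistical claim, so here is what such a comparison looks like in miniature: a two-sample t-test on exam scores from a face-to-face section and an online section. The scores below are invented purely to illustrate the kind of test these studies run.

    from scipy import stats

    face_to_face = [78, 85, 90, 72, 88, 81, 76, 84]   # invented scores
    online       = [80, 83, 87, 74, 86, 79, 78, 85]   # invented scores

    t, p = stats.ttest_ind(face_to_face, online)
    print(f"t = {t:.2f}, p = {p:.3f}")
    # A p-value well above 0.05 means we can't distinguish the two modes.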

10.25.2017 LMS Switchers

If you're changing LMS this year, you're likely going to Canvas

Although the title of the e-Literate article says that Blackboard may be turning things around, the graph doesn't show it. If you're a school in the U.S. or Canada and you're changing LMS, odds are high that you're going to Canvas (61%) or D2L Brightspace (38%). Note that this graph does not represent current market share, but where the switchers are going. Still, it's a strong predictor of future market share, and it shows that Blackboard's in trouble. Blackboard's current goal seems to be to hold onto existing customers and stop the bleeding until their next-gen LMS, Ultra, is ready for prime time. That date has been slipping for some time, however, and I'm pretty certain that if I had to switch LMSes today, I'd go to Canvas. I've been a fan since 2009. But the decision to switch is not an easy one. Changing LMSes is like moving to a new home in a new city in a different country. It's not to be undertaken lightly, because the move is difficult, time consuming, costly, and painful, even when the end result is a better system. So why isn't anyone moving to Blackboard? The answer is simple: you'd have to migrate twice, once to the current system, and then again to the new one. That would be nuts.

09.22.2017 Western Governors in Trouble with Feds

In case you missed it, Western Governors University has been audited by the Federal Department of Education and found not to be distinctly different from a correspondence school. If the finding is upheld, WGU could be forced to return $713 million in federal financial aid. This has serious implications for all schools involved in distance learning and personalized learning! But the Feds are right about one thing. An online course can and should be much more than a correspondence class.

10.13.2017 Update: "WGU is not off the hook."

11.02.2016 The way we assess students makes no sense.

Traditional testing forces students to cram, regurgitate, and forget.

Have you ever thought about why we test students the way we do? What do I mean? Well, we generally test students in isolation from each other. We disallow aids like notes, calculators, textbooks, and cellphones. We ban the use of Google and Wikipedia. We set strict time limits and restrict students to their seats. We use a lot of multiple choice and fill-in-the-blank, with perhaps a smattering of short essay. Someone is watching constantly. Now, you may be thinking, "Of course. How else can we keep them from cheating? How else can we find out what they know? How else can we keep them from helping each other?" I would argue that those are the wrong questions. Sugata Mitra has an interesting TED talk in which he develops the idea that the present-day education system remains much as it was designed by the British Empire in the 18th century. At that time, what was needed were clerks and bookkeepers who could do math in their heads, and read and write without help, primarily to keep track of goods moved around the world in sailing ships. He argues convincingly that the education system isn't broken. It works remarkably well. It's just that it trains students to do things there is little need for in the information age. Rather than testing for the ability to memorize and regurgitate without understanding, we need to redesign assessment around collaboration, persistence, synthesis, and creativity.

When we attempt to solve a problem at home or at work, what are the first things we do? Gather some background information. Consult an expert. Get some help. Brainstorm. Try more than one approach. Keep at it. None of these methods is allowed during a test, but this is the way we solve problems in the real world. Sure, we need to have the vocabulary. When I go to the hardware store, I need to be able to explain the problem so they can recommend the right tool. Yes, I need some basic understanding as a foundation. But why, in the 21st century, when all the knowledge of humanity is a few clicks away, must I regurgitate memorized facts on an exam without any help? How often would I not have access to these resources in the real, everyday world? Perhaps if I'm lost in the woods and my cell phone is out of juice, then I would need to solve a problem in isolation and without assistance. But that seems more like the exception than the rule. Cramming for a test, regurgitating a collection of memorized facts, and forgetting it all the next day is like bulimia: there is little educational value in consuming information you can't retain, just as there is little nutritional value in eating food you don't keep down.

Most problems we face in the real world don't occur in an isolation chamber. They don't come with someone hovering over you with a stopwatch. They don't require that all of the knowledge needed to solve the problem is already in your head. They don't require you to stay seated, or to work alone. They don't present you with five distinct choices, only one of which is correct. They don't allow you only one attempt. That would be crazy. And yet, that's exactly how we test students, from elementary school all the way through college. Think about these questions for a bit. What kinds of students are successful at that kind of testing? How well does that reflect their future performance on the job? What skills do employers regularly ask for? When hiring someone, is it more important that they already know how to do the job, or that they are creative, persistent, able to learn, and able to work well with others? How well do we prepare students for the challenges they will face?

What are the skills we need to employ in modern-day problem solving? Usually, they involve gaining an understanding of the problem, either by doing research or by getting help from someone who knows more about the topic. Once we understand the problem, we develop one or more strategies to solve it, based on cost, time, effort, and available resources. Often, the first solution is inelegant, but it might be good enough. "Fail small, fail often" is advice I've heard from many successful problem solvers. Don't be afraid to try things. Break the problem into pieces and solve each part separately. Creative solutions rarely come from aiming directly at the problem and going full speed ahead. But the key point here is that we learn to be creative by attacking problems not with a head full of facts, but with a kit full of tools that can be used again and again. You may be thinking that I've got a point, but that it's easier to grade answers right or wrong when we test facts, not opinions. However, it's actually not so hard to grade students a better way. You look at how they tackled the problem. It's the difference between awarding points only for the answer and asking students to show their work, evaluating both the quality of the end product and the sophistication of their methods. Let them work in teams. Let them use any resources they can get their hands on. This is an approach to teaching and learning that actually prepares students for a job in the real world.

"But wait," you're saying. "If I assign group work, how can I tell who did what?" Yes, that can be tricky. We've all been assigned to a team where one person does almost nothing, and gets the same amount of credit as those who pulled most of the weight. That's a problem with the way the group members were evaluated. But guess who knows who did which parts of the job, and how well they did them? The members of the group. A very clever way to grade students is to have them evaluate their own performance and that of their fellow group members by secret ballot. Average out the peer grades and compare it to the grade they gave themselves. You'd be surprised how accurately this will match your own observations, and how well it reveals who did the work. Of course, you also assign the work an overall grade, so that if everyone agrees to give themselves higher grades than they deserve, there is a correction factor. This method may need to be employed more than once before students realize that their actions are accountable, so don't give up after just one try. You will find that it becomes even more effective as time goes on.

There is another thing you can try when assigning group work, if you're still having challenges. Identify the different kinds of work necessary to put together the final project. For example, in a lab experiment, one person is the group manager, whose job is to lead, organize, plan, make decisions and settle disputes. Another is the experimenter, the hands-on person, who must be good at understanding and following instructions. A third is the data collector, who might also be in charge of creating graphs and charts. A fourth is the analyst and writer of the report. A fifth is the presenter. These are somewhat arbitrary divisions of responsibility, but you get the idea. When you assign duties within the group, people sort themselves into the kind of work they like to do. Students who hate to get up in front of others and talk might be excellent writers. Students who like to present might not want to get their hands dirty, or be good at following detailed instructions. That's ok. Everybody can make a contribution. And, if someone really wants to work alone, let them. As long as they understand they have to do the same amount of work as a whole group would, that's fine. That's how the world works.

09.21.2016: The Classroom of the 21st Century

09.06.2016: College is for Everyone?!

08.16.2016: Adaptive Courseware: The Good, the Bad, and the Ugly

11.08.2015: The Coming Disruption of Academia

09.08.2015: 21st Century Learning

02.28.2015: Why College is Like a Gym Membership

11.27.2014: Pick Any Two

06.24.2013: The Technology Adoption Curve

06.17.2013: Time to change my password? Oh, just shoot me now!

03.27.2013: Technology is the answer. What was the question?

03.06.2013: After bad stuff happens :-(

02.28.2013: Latest Reports from the LMS Battleground

10.10.2012 The wave of change that's about to hit higher education

04.03.2012 Blackboard Embraces Open Source...like a Boa constrictor

04.01.2012 What Google and Facebook have in common

03.25.2012 Message to the eContent providers

03.20.2012 Textbooks of the Near Future.

01.06.2012 Are we putting the cart before the horse?

01.03.2012 Unintended Consequences.

10.17.2011 Quality Matters?

09.29.2011 The "do-over" mentality in undergraduate education

12.12.2010 Why going "TSA" on web classes just won't work

06.14.2010 What Google should do next

06.11.2010 Why NAU's Mobile Computing Policy needs rethinking

05.05.2010 College is for Everyone, so Attendance is Mandatory!

04.20.2010 LMS Decisions

04.12.2010 The Hacker-Hipster Manifesto

02.19.2010 What is up with Google lately?

01.22.2010 Preparing for Snow Days, Epidemics, and Other Disasters

01.04.2010 Clickers: Treating the symptoms or the disease?

12.20.2009 Spreading the FUD

10.14.2009 NAU adopts MS Exchange; increase in productivity negligible

10.02.2009 How to get attention in Academia

10.01.2009 Universal Design

09.30.2009 Should NAU site license the MacOS as well as Windows?

09.01.2009 Marketshare change among LMSes over time

05.26.2009 Mac Growth in Higher Ed

05.21.2009 Microsoft on the move?

04.15.2009 Free and Open Source Software in the Enterprise

Why Computing Monopolies are Bad

How fast is your network connection?

Data Visualization

Mossberg puts his finger on it, and his foot in it.

Why can't Microsoft get it right?

The truth about telecommuting

Blackboard's Scholar

Learning Spaces

Podcasting with iTunesU

Gaming on Campus