Ratings may reward colleges for selectivity

Colleges should be rewarded for educating students, not for selecting only the best, said Andrew P. Kelly, who directs the American Enterprise Institute’s Center on Higher Education Reform, at hearings on the president’s proposed college ratings system.

Unfortunately, our ability to measure the “value-added” by a college program is almost nonexistent, and the measures that the Department of Education has proposed are woefully insufficient as an approximation of that quantity.

It is much easier for colleges to change the students that they enroll than it is to change the quality of education that they provide.

If the ratings system does not account for this, it will likely set up a scenario in which selective colleges are provided with even more resources, while open-access institutions work to become more selective in an effort to improve their outcomes.

Federal ratings should not be linked to federal student aid, argued Kelly. Instead, the ratings should be designed to help prospective students evaluate different programs at different colleges.

The Education Department plans to use the percentage of students receiving a Pell Grant as a measure of access. The measure should instead track Pell graduates, said Kelly.

Outcomes measures will be based on flawed graduation data, said Kelly. “We need some validation that the diplomas colleges award are worth something,” such as whether graduates earn enough to pay off their loans. In addition, those developing PIRS (the proposed Postsecondary Institution Ratings System) should include “rigorous pre- and post-measures of success, or at least identify relevant control groups to compare results.”

Smaller, more selective schools could easily raise their access ratings and lower their net price by admitting more low-income students, Kelly said. That would help only a small number of students.

Large, less selective schools with low rates of student success have a tougher choice. “They can embark on the hard, uncertain work of improving teaching and learning to boost student success. Or they can take the easier route and admit fewer low-income students.”

All of this is to say that if improvement is quicker and easier for low access/high success schools than it is for high access/low success schools, then rewards will accrue to the former. That will simply reinforce their place atop the higher education system and, frankly, waste taxpayer dollars on schools that don’t need them.

Selectivity is the key to U.S. News’ prestigious “best colleges” rankings, Kelly wrote in an earlier Forbes column. “Those measures often have everything to do with who colleges admit and less to do with what colleges actually teach them while they’re there.”

What have grads learned? What can they do?

It’s not enough to push more students to a college degree, writes Richard M. Freeland, commissioner of higher education in Massachusetts, in the Boston Globe. We need a way to evaluate how much students have learned.

“Without a common set of criteria by which to gauge the quality of student work, we can’t improve our programs, enhance curricular design, or effectively prepare students for future employment and civic engagement,” he writes.

As part of Massachusetts’ Vision Project, public colleges and universities have created a statewide framework to assess student learning outcomes.

This pilot effort — launched at seven community college, state university, and UMass campuses last year — assessed broad dimensions of liberal arts learning. Hundreds of student papers, lab reports, and other samples of written work were collected from a wide range of courses across many disciplines. Several dozen faculty scorers then used rubrics, or standards, developed by the Association of American Colleges and Universities to assess student work in three areas: written communication, critical thinking, and quantitative literacy.

With training, faculty members reached “a high degree of consensus on the quality of student work,” Freeland writes. “Many faculty discovered that their assignments would need to be redesigned if their students were to be able to demonstrate the competencies spelled out in the rubrics.”

Eight states — Connecticut, Rhode Island, Indiana, Kentucky, Minnesota, Missouri, Oregon, and Utah — have joined Massachusetts in an attempt “to produce cross-state comparisons of student learning outcomes,” he writes.

Answering the question, “Is college worth it?” isn’t just a matter of calculating college costs and graduates’ earnings, concludes Freeland. What have they learned? What can they do?

What do students learn in college?

Massachusetts is leading a nine-state effort to measure what students learn in college, writes Marcella Bombardieri in the Boston Globe.

The plan is to compare students’ work, including term papers and lab reports, rather than using a standardized test.

“There is tremendous interest in this nationally, because everybody in higher education knows, if this doesn’t work, the next answer is a standardized test probably imposed by the federal government or by states,” Commissioner Richard M. Freeland said at a state Board of Higher Education meeting . . .

The Association of American Colleges and Universities is overseeing the project, which recently received $1 million in funding from the Bill & Melinda Gates Foundation.

Some professors are worried that campuses or instructors may be punished for poor results when they are doing their best to help students who arrived on campus underprepared, Paul F. Toner, president of the Massachusetts Teachers Association and a higher education board member, told the Globe. “I think there’s just a concern that they’re going to be held accountable for things beyond their control,” he said.

Before reaching out to other states, Massachusetts conducted a pilot project last spring. Seven campuses — including several community colleges, Framingham and Salem state universities, and the University of Massachusetts Lowell — gathered about 350 samples of class assignments completed by students who were nearing graduation.

Then a group of 22 professors spent three days over spring break at Framingham State evaluating the work for what it showed about each student’s abilities in written communication, quantitative literacy, or critical thinking, said Bonnie Orcutt, director of learning outcomes assessment for the Department of Higher Education.

Massachusetts is working with Connecticut, Indiana, Kentucky, Missouri, Minnesota, Oregon, Rhode Island and Utah to expand the experiment. 

MOOCs are hot, but do students learn?

MOOCs (massive open online courses) are red hot in higher education, reports Time. A third of college administrators think residential campuses will become obsolete. State legislators are pushing for-credit MOOCs to cut college costs. But how much are MOOC students learning?

“At this point, there’s just no way to really know whether they’re effective or not,” said Shanna Jaggars, assistant director of the Community College Research Center at Columbia University’s Teachers College, which has produced some of the most recent scholarship about online education.

Enrollment in online college courses of all kinds increased by 29 percent from 2010 through 2012, according to the Babson Survey Research Group. However, completion rates are low. Only about 10 percent of people who sign up for a MOOC complete the course.

Advocates say that’s because there are no admissions requirements and the courses are free; they compare it to borrowing a book from the library and browsing it casually or returning it unread.

In addition, completers don’t earn college credits. In a survey by Qualtrics and Instructure, two-thirds of MOOC students said they’d be more likely to complete a MOOC if they could get college credit or a certificate of completion. That’s still not widely available, notes Time.

Until it is, said Jaggars, it will be hard to measure the effectiveness of MOOCs—a Catch-22, since without knowing their effectiveness, it’s unlikely colleges will give academic credit for them.

To study what happens when students get credit for online courses, Teachers College looked at online courses at community colleges in Virginia and Washington State that were not MOOCs—since tuition was charged and credit given—but were like them in other ways. The results were not encouraging. Thirty-two percent of the students in online courses in Virginia quit before finishing, compared with 19 percent of classmates in conventional classrooms. The equivalent numbers in Washington State were 18 percent versus 10 percent. Online students were also less likely to get at least a C, less likely to return for the subsequent semester, and ultimately less likely to graduate.

San Jose State’s experiment with for-credit MOOCs was suspended in response to very low pass rates.  Pass rates improved significantly in the summer semester, but “a closer look showed that more than half of the summer students already had at least a bachelor’s degree, compared to none of the students who took online courses in the spring.” Even then, more summer registrants dropped out than in traditional classes.

“In general, students don’t do as well in online courses as they do in conventional courses,” said Jaggars. “A lot of that has to do with the engagement. There’s just less of it in online courses.”

Despite all this, 77 percent of academic leaders think online education is as good as face-to-face classes or better, Babson found. Four in 10 said their schools plan to offer MOOCs within three years, according to a survey by the IT company Enterasys.

In a new Gallup poll, 13 percent said employers see an online degree as better than a traditional degree, while 49 percent said the online degree has less value for employers. Online education gives students more options and provides good value for the money, but is less rigorous, most respondents said.

‘Value’ plan leaves out learning

President Obama proposes rating colleges and universities on access, graduation rates, graduate earnings and affordability, writes Richard Hersh in an essay on Inside Higher Ed. What about learning?

Myriad studies over the past several decades document that too little “higher” learning is taking place; college students do not make significant gains in critical thinking, problem solving, analytical reasoning, written communication skills, and ethical and moral development.

Institutions respond to rewards, Hersh writes. Linking federal student aid to easily measured goals “will steer colleges and universities further away from higher learning.”

Hersh is co-author of We’re Losing Our Minds: Rethinking American Higher Education.

Colleges announce new performance metrics

Colleges and universities should be judged by student progression and completion, employment outcomes, repayment and default rates on student loans, institutional cost per degree and student learning, concludes a report by HCM Strategists for the Voluntary Institutional Metrics Project.

Eighteen institutions — community colleges, online institutions, for-profit and non-profit colleges and one research university — have worked for more than two years to develop the performance measures with funding from the Gates Foundation.

Many in higher education believe “if colleges don’t figure out how to measure the quality and value of their product, lawmakers will do it for them,” writes Paul Fain on Inside Higher Ed.

Participating colleges had hoped to release institutional “dashboards” based on the new metrics, but there were too many problems measuring employment and learning outcomes.

Many data-driven efforts are aimed at students and their families, notes Fain. VIMP is designed for legislators. “Policy makers often seek data on too many variables, resulting in data overload and lack of focus,” the report said.

The new performance measures try to take into account different colleges’ circumstances. Colleges that serve many low-income students won’t have the same graduation or loan repayment rates as elite colleges that enroll predominantly well-off and well-prepared students. VIMP rates each institution against its predicted performance range.

To measure efficiency, the dashboards include a cost-per-degree metric. Unlike other data sets, this one included operating costs but stripped out capital expenses, which can cloud the picture of what colleges spend to educate students.
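
As a rough sketch of how such a metric works — the figures and field names below are invented for illustration, not taken from the report — cost per degree divides spending, net of capital expenses, by the number of credentials awarded:

```python
# Rough sketch of a cost-per-degree efficiency metric like the one described
# above: spending with capital expenses stripped out, divided by credentials
# awarded. Figures and field names are invented, not taken from the report.

def cost_per_degree(total_spending, capital_expenses, degrees_awarded):
    """Spending net of capital outlays, per degree or certificate awarded."""
    return (total_spending - capital_expenses) / degrees_awarded

# Example: $120M in annual spending, $20M of it on capital projects,
# and 2,500 credentials awarded that year -> $40,000 per credential.
print(cost_per_degree(120_000_000, 20_000_000, 2_500))
```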

College completion measures include part-time as well as full-time students and account for transfers, total credits attempted and time to a credential. However, collecting all that information is burdensome, the report admits.

(The employment measure) connects higher education data with unemployment insurance information, analyzing wages and employment status one and five years after graduation. Whether students were attending graduate school after completing is also factored in.

However, only a few states and colleges currently connect those sources of data, according to the report. And there is no standardized approach for reporting employment outcomes.
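
A minimal sketch of what that kind of data linkage might look like, using invented tables and field names (actual state wage-record systems and the project’s approach will differ):

```python
# Hypothetical sketch of linking graduate records to unemployment-insurance
# wage records and summarizing earnings one and five years after completion.
# Table layouts and field names are invented; real state systems differ.
from statistics import median

graduates = [                      # one record per completer
    {"id": 1, "grad_year": 2008},
    {"id": 2, "grad_year": 2008},
]
ui_wages = {                       # annual UI wage records keyed by (id, year)
    (1, 2009): 31000, (1, 2013): 45000,
    (2, 2009): 12000, (2, 2013): 52000,
}

def median_wage_after(years):
    """Median wage among graduates with a UI wage match `years` after completion."""
    matched = [ui_wages[(g["id"], g["grad_year"] + years)]
               for g in graduates
               if (g["id"], g["grad_year"] + years) in ui_wages]
    return median(matched) if matched else None

print(median_wage_after(1), median_wage_after(5))   # 21500.0 48500.0
```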

Measuring student learning proved to be the most difficult challenge. The project tried “to develop metrics for both core skills and major-specific — or upper-division course equivalent — learning,” but couldn’t find appropriate tests to do so.

The 18 participants include the community college systems of Indiana, Kentucky and Louisiana and community colleges in Texas, Maryland, Arizona and Kansas.

It’s the learning, stupid

The community college Completion Agenda aims to double the number of students who complete a one-year certificate or an associate degree or who transfer to complete a credential, writes Terry O’Banion in Community College Times. College leaders have focused on orientation, advising, placement, financial aid — everything but teaching and learning.

Key leaders involved in the Completion Agenda recognize the need to focus more attention on teaching and learning and classroom instruction. Jamie Merisotis, president of Lumina Foundation, has noted: “Oddly enough, the concept of learning—a subject that seems critical to every discussion about higher education—is often overlooked in the modern era. For us, learning doesn’t just matter. It matters most of all. It’s the learning, stupid.”

. . . Kay McClenney and her colleagues at the Center for Community College Student Engagement (CCCSE) also weigh in on this conversation: “Student success matters. College completion matters. And teaching and learning—the heart of student success—matter.”

When students are “actively engaged,” they’re more likely to learn, persist and reach their goals, according to CCCSE research.

Improving classroom success in the first year is critical, especially for low-income students, says Vincent Tinto.

Shugart: Completion is an ‘ecosystem’ issue

Improving completion requires understanding the higher education “ecosystem,” writes Sanford C. Shugart, president of Aspen Prize-winning Valencia College in Florida.

Community colleges “are being asked to achieve much better results with fewer resources to engage a needier student population in an atmosphere of serious skepticism where all journalism is yellow and our larger society no longer exempts our institutions (nor us) from the deep distrust that has grown toward all institutions,” writes Shugart in Inside Higher Ed.

His principles for moving the needle on student completion start with a caution: “Be careful what and how you are measuring — it is sure to be misused.”

. . . Consider a student who comes to a community college, enrolls full-time, and after a year of successful study is encouraged to transfer to another college. This student is considered a non-completer at the community college and isn’t considered in the measure of the receiving institution at all.

. . . Is there any good reason to exclude part-time students from the measures? How about early transfers? Should non-degree-seeking students be in the measure? When is a student considered to be degree-seeking? How are the measures, inevitably used to compare institutions with very different missions, calibrated to those missions? How can transfer be included in the assessment and reporting when students swirl among so many institutions, many of which don’t share student unit record information easily?

Completion rates should be calculated for different groups depending on where they start — college ready? low remedial? — so students can calculate their own odds and colleges can design interventions, Shugart recommends. College outcomes measures should be based on college-ready students and should reflect the value added during the college years.
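
A quick sketch of the grouped calculation Shugart describes — the readiness categories and student records here are invented for illustration:

```python
# Illustrative sketch: completion rates broken out by students' starting point,
# as Shugart recommends. Readiness categories and records are invented.
from collections import defaultdict

students = [
    {"readiness": "college-ready", "completed": True},
    {"readiness": "college-ready", "completed": True},
    {"readiness": "low remedial",  "completed": True},
    {"readiness": "low remedial",  "completed": False},
]

totals, completions = defaultdict(int), defaultdict(int)
for s in students:
    totals[s["readiness"]] += 1
    completions[s["readiness"]] += s["completed"]

for group in totals:
    print(group, f"{completions[group] / totals[group]:.0%}")
# college-ready 100%, low remedial 50%
```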

Students experience higher education as an “ecosystem,” Shugart writes. Few community college students get all their education at one institution.

They swirl in and among, stop out, start back, change majors, change departments, change colleges. . . . Articulation of credit will have to give way to carefully designed pathways that deepen student learning and accelerate their progression to completion.

Students need to know that completion matters, writes Shugart. Florida has “the country’s strongest 2+2 system of higher education” with common course numbering,  “statewide articulation agreements that work” and a history of successful transfers. Yet community college students are told to transfer when they’re “ready,” regardless of whether they’ve completed an associate degree.

Students at Valencia, Seminole State, Brevard and Lake Sumter are offered a new model, “Direct Connect,” which guarantees University of Central Florida admission to all associate degree graduates in the region. “It is something they can count on, plan for, and commit to. Earn the degree and you are in.”

Learning is what matters, Shugart adds. Increasing completion rates improves the local economy and community only if students learn “deeply and effectively in a systematic program of study, with a clearer sense of purpose in their studies and their lives.”

He suggests: designing degree pathways across institutional boundaries, encouraging students to “make earlier, more grounded choices of major,” requiring an associate degree to transfer and providing transfer guarantees. In addition, Shugart calls for research on higher education ecosystems and new metrics for measuring performance.

Carnegie eyes replacing Carnegie unit

The Carnegie Unit, which measures credit by time in class rather than by actual learning, may be on the way out. The Carnegie Foundation for the Advancement of Teaching, which developed the measure in 1906, will study ways to measure competency using a $460,000 Hewlett Foundation grant.

. . . the unit is a gauge of the amount of time a student has studied a subject. For example, a total of 120 hours in one subject, meeting four or five times a week for 40 to 60 minutes, for 36 to 40 weeks each year earns the student one “unit” of high school credit.

The Carnegie Unit was developed to push for higher standards, not to measure learning, says researcher Elena Silva.  “It is not a good universal measure for student progress. … We are curious to know how it might be changed and more aligned with better, richer tools for measurement.”
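
The time-based arithmetic is easy to check; here is a quick sketch using one illustrative meeting pattern drawn from the ranges quoted above:

```python
# Quick check of the time-based definition quoted above: roughly 120 hours of
# class time in one subject earns one Carnegie Unit. The meeting pattern below
# is one illustrative combination within the quoted ranges.
meetings_per_week = 4      # "four or five times a week"
minutes_per_meeting = 50   # "40 to 60 minutes"
weeks_per_year = 36        # "36 to 40 weeks each year"

hours = meetings_per_week * minutes_per_meeting * weeks_per_year / 60
units = hours / 120
print(hours, units)        # 120.0 hours -> 1.0 unit
```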

It’s about time to rethink the credit hour, writes Matt Reed, a community college administrator.

It’s now normal for degree programs to specify student learning outcomes, and to be able to measure them. That’s huge.

Online education has thrown the whole concept of “seat time” into question, too. Since most online instruction is asynchronous anyway, it’s becoming harder to say with a straight face that learning has to happen in 75 minute chunks.

Now, MOOCs are starting to raise issues about the notion of “credit” itself, even independent of the “hour” part.

. . . At the same time, the federal financial aid programs are actually getting more persnickety about the most backward-looking elements of the credit hour, in response mostly to abuses in the for-profit sector.

Financial aid and faculty contracts are based on credit hours, at least in part, Reed writes. Figuring out an alternative will require a lot of work. So let’s get started.

It’s the learning, stupid

How did Valencia College in Orlando, Florida, win the Aspen Prize for community college excellence? President Sandy Shugart has six big ideas about what community colleges should do to enable learning, writes Fawn Johnson.

1) Anyone can learn anything under the right conditions.
2) Start right.
3) Connection and direction.
4) The college is how the students experience us, not how we experience them.
5) The purpose of assessment is to improve learning.
6) Collaboration.

Valencia’s graduation rate is nearly three times the average at large urban community colleges.  Other colleges are looking for Valencia’s “secret sauce.”

Many community colleges enroll huge numbers of students, collect the tuition and then see most of them drop out.

Valencia sacrifices its enrollment numbers (and the accompanying dollars) by turning away students who fail to register before the first day of a class. Research shows that students who register late are more likely to drop out, so Shugart says it makes sense to head those students off.

The college integrates advising with teaching. “Faculty members are expected to participate in plotting their students’ graduation paths, but each program also has an embedded full-time career adviser to track students’ progress,” Johnson writes.

Faculty members test teaching ideas in a three-year “learning academy.”  Adjuncts are paid more if they participate in developing their teaching skills.

Valencia invests most heavily in improving 15 to 20 “gateway courses” that make up 40 percent of the curriculum for first-year students.

Planning is required. “When I was in college, the idea was that your freshman and sophomore years was an exploratory time. Totally gone. It is not exploratory,” said Joyce Romano, Valencia’s vice president for student services. “Decide when you’re in the womb what you want to do.”

All students are expected to map out a graduation plan in their first semester. They must “connect” with faculty members, career advisers, tutors, and student-services staffers. Tutors—usually students themselves—know the professors personally and often sit in on classes to seek out students who might feel shy about asking for help. Tutoring centers are located in central campus areas, and they are packed.

Valencia constantly analyzes student-achievement data, but instructors are judged on their teaching, not their students’ test scores.