1.25.2012 | One of the first things many regional economic development groups do is establish a set of benchmarks of success. It’s critical, after all, to know if the efforts are having any effect, especially in an era of tight budgets.
But George Erickcek, a senior regional analyst at the W.E. Upjohn Institute in Kalamazoo, Michigan, says that rush to measure can be a little hasty. While benchmarks are necessary, they come with some common pitfalls. In an article in January's "Employment Research" newsletter, Erickcek walks readers through some of those pitfalls, as well as ways to avoid them.
The pitfalls are no doubt familiar to anyone who has worked in regional economic development: Don't rely on stand-alone dashboards as a marker of progress; don't go overboard with the number of indicators of progress; manage expectations; and others. But Erickcek's concise article offers some valuable insights.
More broadly, Network members Bill Barnes and Kathryn Foster write about regional development goals in their new paper, "Regional Problem-Solving: A Fresh Look at What It Takes." [pdf]
Among the five pitfalls, this one likely elicits the “aha moment.”
In the immortal words of Mies van der Rohe, less is more: Too often, Erickcek says, groups try to track too many things, getting lost in the fog as a result. "Tracking more data does not necessarily generate more clarity," he notes.
Ultimately, information overload can be paralyzing. The dashboard, he says, should look like the dashboard in a car, not an airplane cockpit. (There are statistical methods that can weed out extraneous indicators.)
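Erickcek doesn't name the statistical methods he has in mind, but one common approach to trimming a dashboard is to drop indicators that are nearly redundant with ones already kept, using pairwise correlation. A minimal sketch in Python (the function name, threshold, and toy indicators are illustrative, not from the article):

```python
import numpy as np

def prune_correlated_indicators(data, names, threshold=0.9):
    """Keep an indicator only if it is not highly correlated
    with any indicator already kept.

    data: 2-D array, rows = observations (e.g., years), cols = indicators.
    Returns the names of the indicators retained.
    """
    corr = np.abs(np.corrcoef(data, rowvar=False))
    keep = []
    for j in range(data.shape[1]):
        # Retain column j only if it adds information beyond kept columns.
        if all(corr[j, k] < threshold for k in keep):
            keep.append(j)
    return [names[j] for j in keep]

# Toy data: "payroll" moves almost in lockstep with "jobs",
# so the dashboard keeps only one of the pair.
rng = np.random.default_rng(0)
jobs = rng.normal(size=50)
payroll = jobs * 2 + rng.normal(scale=0.01, size=50)  # near-duplicate of jobs
housing = rng.normal(size=50)                          # independent signal
data = np.column_stack([jobs, payroll, housing])
kept = prune_correlated_indicators(data, ["jobs", "payroll", "housing"])
```

Here `kept` retains "jobs" and "housing" while dropping the redundant "payroll" series, shrinking the dashboard without losing a distinct signal.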
This one also likely rings a bell. Manage expectations: Don't expect to move the needle on such immovable boulders as "increase per capita income." That's not in the realm of regional economic development teams. National policies, industrial factors outside the influence of local organizations, and demographic shifts are what affect per capita income. A better route, he says, is to start small. Create strategies that "address the factors associated with the performance indicators, such as create a small business assistance program, or design customized training programs for area employers."
After all, he notes, economic development groups can’t control the weather; they can’t demand that local firms add more jobs or workplace training. They can only tend the soil and water the plants.
"One of the greatest fears I have is that an outstanding economic development program that is cost-effective and generates positive results could be terminated because it did not do the impossible: make a noticeable bump in the area's per capita income or employment statistics," Erickcek writes.
He offers many other insights and tips, including how to avoid fixating on a single indicator and mistaking outputs or inputs for outcomes. It's worth a read.
And for more in-depth treatment, Erickcek also has written extensively about the topic in these publications:
- "Development of a Regional Economic Dashboard" [pdf] (with Randall Eberts and Jack Kleinhenz, W.E. Upjohn, 2006)
- "Social and Economic Indicators Typifying the Community's Health" (with Bridget Timmeney et al., W.E. Upjohn, 2009)
- "Economic Dashboard Supplemental Report: Other Social and Economic Indicators" [pdf] (W.E. Upjohn, 2007)