Wednesday, March 31, 2010

Empirically verifying models of competences

The ethics textbook I use is called Moral Competence, and its focus is just what it says it is. The book develops a five-fold model of a morally competent decision maker. Although the book is riddled with arguments--for instance claiming that virtue alone, as the book defines it, is not sufficient for moral competence--it doesn't make an effort to empirically verify the system as a whole. This makes sense for what the book is trying to do. The model is presented mostly as a framework for students to deliberate on their own moral competence and for synthesizing results and ideas from a wide range of philosophers and psychologists.

Still, something could be gained by looking at the ways other disciplines have developed models of competences and verified them. The focus on language competence in Chomskian and post-Chomskian linguistics would be a good model. A lot of emphasis is placed there on error patterns. It is a big deal that children frequently overgeneralize grammatical rules, but never simply spit out strings of words without any grammar. Something similar happens in neuropsychological models of different abilities. A stroke can impair one aspect of an ability but not another--for instance, a patient's ability to understand written number words like "one thousand fifty-four" but not Arabic numerals like "1054"--and this says something about how we model the competence.

I'm trying to figure out what the equivalent data for moral failings would be. Liszka's system encompasses many classical distinctions between kinds of moral failings, such as Aristotle's distinction between failures of knowledge and failures of will. It also brings in psychological theories of failure, like ideas about antisocial personality disorder and failures of empathy. But I still don't have a good sense of what the data to be explained is.

Are there kinds of moral failure we just don't see, akin to the grammatical mistakes that, as Chomsky noted, children never make? Are there cases of selective impairment that would help us here?

If I still had time for a research program, I would do more reading in the moral psych literature to figure this out. Right now, though, the things I've seen really work orthogonally to this issue. So now I'm just wondering out loud.

Friday, March 19, 2010

Philosophical Mission Statement #1

143 Let us now imagine the following kind of language game: when A goes to B and asks him to write down a mission statement.

The first step of this game is to look at the mission statements of other departments. —How does he get to understand their assertions?— First of all, he will be required to copy them. And here already there is a normal and an abnormal learner’s reaction. At first he might simply transcribe them, substituting the phrase “Philosophy and Religious Studies” for the term “Biology”; but then the possibility of getting him to understand will depend on going on to independently write a mission statement that is appropriate for philosophy and religious studies.—And here we can imagine, e.g., that he does write the mission statement independently, but he substitutes jargon, saying “In support of the college’s wider excellence, the Department of Philosophy and Religion strives to provide mission with a solid sustainability and diversity.” And then communication stops at that point.—Or again, he makes ‘mistakes’ in adapting the statement to philosophy by writing the mission statement in the style of one or another great philosopher. Here we shall almost be tempted to say that he has understood wrong.

Wednesday, March 17, 2010

Scientist in white lab coat: 62%. Perky reality TV show host: 82%

A French producer has created a one-episode reality TV show based on the Milgram experiments. (Here's a Salon article and an Agence France-Presse story). The kicker: while Milgram could only get 62% of his subjects to deliver a lethal shock, the reality TV show got 82% compliance: apparently lethal shocks delivered before a cheering crowd. There are a lot of obvious reasons for this. The crowd has to help a lot. Also, the subjects were not randomly selected--they were self-selected fame hounds. Finally, I think we have to recognize that perky reality TV show hosts hold an awesome amount of authority in our society.

Monday, March 15, 2010

R. Crumb's Illustrated Genesis.

I've been reading R. Crumb's illustrated version of the Book of Genesis, and I've been meaning to blog about it for a while, because it routinely floors me. Unfortunately, every post I can think to write about it would sound like this:

You know that part when Judah hires a prostitute, and he doesn't know it is his daughter-in-law because she is wearing a veil, and he gets her pregnant, and then when she is going to be burned at the stake for being a whore she says "Judah is totally my baby daddy, and here I've got his ceremonial seal to prove it."

That's fucked up, man.


Crumb totally made the right decision to simply illustrate Genesis at face value. He even portrays God as a man with a long white beard. By taking Genesis at its word, Crumb transforms his amazing comics mojo into a conduit for the total fucked-up-edness that is the Bible.


Also fucked up: the fact that Tamar (the daughter-in-law) is pretending to be some kind of temple prostitute and that her fee is a baby sheep.

Saturday, March 06, 2010

Alien Envy


[Photo: Alien Envy, originally uploaded by rob helpychalk.]

from Tedra: National Geographic has pictures of rare black penguin. http://bit.ly/cTdqkW. PK points out that its standing on rock beach & hypothesizes that color could prove 2B advantageous adaptation & that other planets would envy earth for having such a cool penguin.

Tuesday, March 02, 2010

"Teacher evaluations have little to no impact on the quality of education or student learning"

The research is resoundingly consistent: Teacher evaluations have little to no impact on the quality of education or student learning (Colby, Bradshaw, Joyner, 2002; Flesher, Sommers, Brauchle, 2000; Frase & Streshly, 1994; Peterson, 2000; Cousins, 1995; Joint Committee, 2008; Shinkfield & Stufflebeam, 1995; Stiggins & Bridgeford, 1985).


That's one of the early statements made in this article by Lindsay Noakes in the Journal of Multidisciplinary Evaluation. The most striking thing here is that Noakes is referring to all forms of teacher evaluation, including not just student evaluations but interviews, competency exams, student performance, and classroom visits by peers and supervisors. The articles she cites are about K-12 teacher evaluation, but it looks like the complaints will carry over to the community college level. Surveys are criticized for not being tested for their validity (whether they measure what they say they measure) and reliability (whether they give similar results in similar situations). Class visits are criticized for the variety of subjective biases that come from the observer. Her bottom line rings true for me: "This is either because teacher evaluations cannot or, more likely, are not being used for the purpose of teacher improvement."

Noakes' article actually isn't that interesting apart from what it cites. She basically lists problems that other people have identified with teacher evaluation, and then borrows a checklist for good evaluations from someone else and says it should be applied to teacher evaluations. Noakes is a grad student at the Evaluation Center at Western Michigan University. In any case, here are the citations for the quotation above.

Colby, S. A., Bradshaw, L. K., & Joyner, R. L. (2002, April). Perceptions of teacher evaluation systems and their impact on school improvement, professional development, and student learning. Paper presented at the meeting of the American Educational Research Association, New Orleans, LA. (ERIC Document Reproduction Service No. ED464916) Link.

Cousins, J. B. (1995). Using collaborative performance appraisal to enhance professional growth: A review and test of what we know. Journal of Personnel Evaluation in Education, 9(3), 199-222. Link.

Flesher, J., Sommers, C., & Brauchle, P. (2000). Enhancing instructor evaluation. Performance Improvement, 39(8), 26-29. Link.

Frase, L. E., & Streshly, W. (1994). Lack of accuracy, feedback, and commitment in teacher evaluation. Journal of Personnel Evaluation in Education, 1, 47-57. Link.

Joint Committee on Standards for Educational Evaluation. (2008). The personnel evaluation standards (2nd ed.). Thousand Oaks, CA: Corwin. Link.

Peterson, K. D. (2000). Teacher evaluation: A comprehensive guide to new directions and practices. Thousand Oaks, CA: Corwin. Link.

Shinkfield, A. J., & Stufflebeam, D. L. (1995). Teacher evaluation: Guide to effective practice. Boston: Kluwer. Link.

Stiggins, R. J., & Bridgeford, N. J. (1985). Performance assessment for teacher development. Educational Evaluation and Policy Analysis, 7(1), 85-97. Link.