All posts by Pantelis Vassilakis

Language and Thought: Explanation and Understanding

Conventional wisdom views language as a device through which thought is actualized into spoken or written word, as a tool that simply assists in the representation of something that precedes it. To paraphrase a science mentor and dear friend of mine, “We do not create the world through language. Language and explicit knowledge are the poor symbolic systems we use to try and communicate about the real creator of the world: implicit rules and knowledge that are metasymbolic.”

I disagree with this assessment and see an important, fundamental feedback between metasymbolic, implicit rules and knowledge on one hand and language on the other. Understanding language formally as a symbolic, self-contained system that is governed simply by syntactical and grammatical rules is narrow and fails to recognize that language does not only express thought but also guides it. Such a failure underestimates language’s potential to both enrich and stifle thought. With this in mind, the belabored arguments below are meant to support a single simple statement:
The task of developing rich (and ideally multi-) language skills should be undertaken not only by language or creative writing majors but by all, since one’s level of linguistic skill provides the basis for critical and creative-thinking development, which is fundamental to all human endeavors.

By the time in our lives that critical thinking and reflection have become prominent aspects of our being, both the use and understanding of language have themselves become implicit, creating the illusion of a given language’s “naturalness.” Those who speak and write fluently in more than one language often discover aspects of thought and feeling that are much more accessible in one linguistic scheme or another, destroying this illusion. I, for one, think and feel differently, express myself differently, and focus on different aspects of my experiences depending on whether I "function" in Greek, English, or German. I can think of several words that exist in one language and not in another (especially words with subtle shades of meaning) that not only suggest differences in how thoughts are expressed but also support the formation of different future thoughts. For example, there is no Greek noun that can capture the meaning of the English "privacy," while the English "hospitality" and the equivalent Greek "filoxenia" (literally and clumsily translated as “friendship towards strangers”) clearly put emphasis on different aspects of the concept they describe. In both cases, the linguistic differences reflect and support attitudes towards privacy and guests that are fundamentally different between the two traditions.

The drawbacks of formal approaches to language come to the forefront especially when trying to address prosody and metaphor, linguistic devices that account for a large portion of communicated meaning and of language use and creation in general. All the formal “substitution” theories of metaphor accomplish is to create a model that is “Ptolemaic” in its complexity and uselessness, trying too hard to stick to existing ideas simply because embracing different ones would require thinkers to enlist the help of unfamiliar intellectual traditions. But I will reserve this topic for a future post.

Winograd and Flores (1986) observe that even sophisticated linguists are puzzled by the suggestion that the basis for the meaning of words and sentences cannot ultimately be defined in terms of an objective external world. Words correspond to our intuition about “reality” simply because our purposes in using them are closely aligned with our physical existence in a world and with our actions within it. But this coincidence is the result of our use of language within a tradition (or as some biologists may say, of our “structural coupling” within a “consensual domain”).  As such, this reality is based on language as much as it reflects it.

Ultimately, language, like cognition, is fundamentally social and may be better understood if approached as a “speech act” rather than a formal symbolic system, a move that introduces the importance of “commitment,” as described in speech-act theories of Austin and others. Both language and cognition are relational and historical, in the larger sense of the word. As Winograd and Flores note, the apparent simplicity of physically interpreted terms such as "chair" is misleading and obscures the fact that communication through words such as "crisis" or "friendship" cannot exist outside the domain of human interaction and commitment, both of which are intricately linked to language (as speech act) itself. This apparently paradoxical view that nothing (beyond simple descriptions of physical activity and some sensory experience) exists except through language describes the fundamentally linguistic nature of all experience and motivates me to approach moments of understanding (i.e. “understanding” experiences) as the achievements of explanatory (i.e. linguistic) acts.

The power of language to create, rather than simply express, thought and meaning may actually be more easily recognized through an examination of the relationship between explanation and understanding. The writings of Gadamer (1960), Ricoeur (1991), and others, have expanded our conception of explanation, illustrating that it cannot be approached as simply the result of and subsequent to understanding. 

Explanation and understanding are both products of thought, “moments” of knowing that constantly interact in a productive feedback. This feedback is manifested as communication, reflection, etc. and has explanation, rather than understanding, at its center. In this scheme, explanation is linguistic in nature (whether as discourse—with someone or within—or text) and understanding is cognitive/phenomenological (whether as thinking or thought). Explanation (interpretation) is not seen as a post-facto supplement to understanding but as belonging to understanding’s inner structure, an integral part of the content of what is understood. I see Gadamer’s efforts to recover the importance of application (“understanding always involves something like applying the text to be understood to the interpreter’s present situation”) as evidence that application is the ultimate explanatory act. As an "explanatory achievement," understanding is the fruit of explanation, "being realized not just for the one for whom one is interpreting but for the interpreter himself." This essentially argues that understanding is “explaining to self.”

If, along with Gadamer, we conceive every statement as an answer to a question, what we understand as a statement’s meaning is an answer, an explanation. And even though the moment of understanding often seems to occur without explicit interpretation/explanation, it is always preceded by an explanation to self, motivated by the hermeneutic question that has to be asked and be answered in any event of understanding.

The understanding/explanation dialectic parallels the one between thought (understanding) and language (explanation). A thought that cannot be “explained” linguistically (to self or others) is better approached as intuition, not as understanding. The revelatory moment of experiencing a work (linguistic or otherwise) that manages to say to us what we could only intuit is what transforms our intuition into thought, helping us escape the prison of our previous language (and thoughts); the intuition is verbally reconstituted through our new language, enriched by our encounter with the work. Our interaction with the work gives us the tools to explain our intuition to ourselves and turn it into a thought, with our newly found understanding being the culmination of an explanatory moment, however “implicit” or “concealed” this moment may seem.

This is just a blog post rather than a piece of academic writing, so I will allow myself the luxury of closing with strong words: Language must be recognized as our means of formulating thought, with all understanding viewed as the result of explanatory moments whose ontology is linguistic. Explanation and understanding, in turn, must be recognized as being tied into a continuous and dynamic feedback loop that develops through the initiation of acts of explanation. With Winograd and Flores, I reject cognition as the manipulation of knowledge of an “objective world,” recognize the primacy of action and its central role in language, and conclude that it is through language that we create our world.

 

References

Gadamer, H. G. (1960). Truth and Method. 2nd edition (1989). New York: Continuum.

Ricoeur, P. (1991). A Ricoeur Reader. M. J. Valdes (ed.). Toronto: University of Toronto Press.

Winograd, T. and Flores, F. (1986). Understanding Computers and Cognition: A New Foundation for Design. Reading, MA: Addison-Wesley.

ADCL or “Do We Really Need Another Acronym?”

Acronyms are introduced regularly in many contexts, not only to facilitate repeated reference to certain terms but also to imply wide acceptance of and add an air of importance to proposed ideas, processes, or methodologies. Instructional design loves acronyms. The buzz-cronyms of the hour include BD, PBL, TBL, and LCI (or LCT) (clues below).

Contributing to this long list, and in many ways consolidating it, I propose ADCL or Assessment-Driven Collaborative Learning. Details will be published in one of the 2009 volumes of Symposium, the journal of the College Music Society. In the meantime, here is a teaser:

ADCL incorporates features of backward design and project- and team-based learning in contexts that highlight student responsibility, all materialized through a series of graded team projects and enhanced by instructor guidance and feedback throughout the project-drafting process. Such design supports a) student motivation and engagement, b) meaningful instructor-to-student and student-to-student interactions, c) instructor- and peer-led learning, and d) formative and summative assessment, by wrapping a course around a single set of manageable, self-contained, resource-supported, and interrelated group assignments. Group assignment responses are drafted and submitted online in instructor-moderated discussion forums.

Evidence, collected over two years of using this technique and formally comparing it to more traditional instructional methods, suggests that ADCL maximizes a course’s learning impact and utilizes the instructor’s expertise and time most effectively and efficiently.

So no, we may not need one more acronym, but I believe we can do with one more effort to improve our students’ learning.

More next time…

Open Course Repositories Online or The Best Things in Life Are Free

Along with the general increase in the number and availability of online resources, educational or otherwise, the last decade has seen a growing trend towards developing complete post-secondary-education courses that are made available online for free. Online courses offered at a premium by a growing number of traditional or exclusively online higher-education institutions vary widely in quality and generally lack systematic, educational-research-backed course-design standards. In contrast, the quality and standards of these free courses are consistently high—probably a reflection of the kinds of faculty and institutions willing to devote time and expertise to free education.

Examples

I) MIT’s OpenCourseWare (OCW), established by the Massachusetts Institute of Technology in 2002, currently offers over 1,800 online courses, enriched with multimedia content and covering thirty-five subject areas within the arts, sciences, and humanities. Created exclusively by MIT, OCW is backed by MIT’s commitment to ongoing updates but is not open to user contributions. It is essentially an electronic, multimedia-enriched version of almost all of MIT’s academic curriculum, including lecture notes, video lectures, exams, etc., offered for free, offering no certification or credit, requiring no registration, and, as indicated on the site, providing access to “materials that may not reflect the entire content of a [given] course.”

II) In a slightly different approach, Carnegie Mellon’s Open Learning Initiative currently offers an eclectic list of only about a dozen courses, which, however, are fully and exclusively developed for online students and are supported by ongoing educational research addressing course design and outcomes. Faculty from all over the world can create a free account and use the repository’s tools to create online courses that can be offered for free or at a nominal fee if credit is required. An educational initiative of a much larger scope than MIT’s OCW, the Open Learning Initiative organizes symposia, maintains pedagogical and education-technology blogs, and offers workshops on using and customizing existing courses and on developing new, effective online courses.

III) Connexions is a collaborative, free, scholarly content archive that combines useful features of both resources discussed above, so I will be spending some more time on it.

The Connexions project started at Rice University in 1999, with the first non-Rice Connexions course contributed by the University of Illinois in 2002. Similarly to MIT’s OCW, Connexions has grown extensively and currently holds over 4,500 course modules, covering most typical disciplines and topics addressed in higher education. Similarly to Carnegie Mellon’s Open Learning Initiative, new content is welcome and can easily be created by faculty from around the world, following a simple registration process.

The resource offers full courses (called ‘collections’), individual course modules, or stand-alone learning activities. Materials and learning activities are very well aligned, while remaining modular for flexibility in course customization. Based on my use of the resource, instructors achieve maximum effectiveness by mixing and matching modules and activities from multiple collections and possibly supplementing them with additional (especially multimedia) materials. Connexions holdings are often linked to relevant course Web sites within the authors’ academic institutions, providing additional resources and context for understanding the materials.

The numerous items related to music (my area of expertise) are listed under “Arts,” with thirteen of the seventeen “collections” and approximately a third of the over four-hundred modules within Arts addressing music or sound-related topics. Items related to acoustics can also be found under “Science and Technology,” and, following a recent partnership with the Institute of Electrical and Electronics Engineers (IEEE), Connexions will also be developing a set of signal-processing educational modules and courses.

The vast majority of the content is accurate, well presented, supported by references to relevant literature, and occasionally enhanced through multimedia resources. Depending on subject area, special plug-ins may be required, all of which are downloadable from within the relevant learning object’s page. Usability features may change slightly with each course and contributor, but all courses/modules checked are clearly organized and very easy to use. The repository itself is also well organized and visually appealing, and it has clear instructions for use when necessary. Although not formally peer-reviewed, the collections are monitored by an editorial team and an oversight board, helping maintain high content standards.

The quality and learning impact of the resource was recently recognized by Harvard University’s Berkman Award (Berkman Center for Internet & Society), presented to Connexions founder and Rice University professor R. Baraniuk for his role in creating the repository. The learning impact and sustainability of this and other open educational resource repositories was addressed in a recent article from the Interdisciplinary Journal of E-Learning and Learning Objects.

We are all usually wary of anything offered for free, and for good reason. When it comes to free education, however, there are serious reasons (e.g. motivation of those offering it) and evidence (see above) that support rehashing the cliché: “the best things in life are free!” I have personally found the free resources discussed very useful and plan to become a contributor in the near future.

LPs Versus CDs: An Unnecessary (and Often Annoyingly Ignorant) Debate

I am truly at a loss as to why we are still arguing about this, but we somehow still are! (See “Retailers Giving Vinyl Records Another Spin.”) Here is the quick answer: digital-audio techniques and media can capture and reproduce sonic events far more faithfully than any analogue technique and medium. Note that the focus of the above statement is fidelity not preference, a distinction that most “LPs versus CDs” debates wrongly blur.

Old news

More than twenty years have gone by since CD sales first surpassed those of LPs, and several highly qualified acousticians and engineers have since weighed in on the LPs-versus-CDs topic, outlining in numerous books, scholarly journal articles, and presentations the mathematical, acoustical, signal-processing, and perceptual issues involved. (See the relevant, well-written article on Wikipedia for a partial bibliography. See, also, an intelligent talk on the topic by Princeton University’s Paul Lansky.) Some of these experts have also sent relevant letters to the popular press or have published blogs and other online resources. (See this post by analog-integrated-circuit designer Mike Demler.)

Still, self-proclaimed “experts” and die-hard lovers of popular myths, who seem to approach knowledge almost exclusively through what C. S. Peirce, in the 19th century, called “the method of tenacity” (holding on to one’s already established beliefs at all costs), insist on keeping the analog-versus-digital-sound debate alive. See, for example, this thread on Audiokarma.org’s discussion forum, which includes a typical example of the types of arguments used by LP advocates: “I know vinyl is better because… it just is” (emphasis in the original!).

I will not waste any time here repeating in detail the arguments for the superior audio fidelity of CDs versus LPs. Interested readers can find more information through a relevant Google search, assuming they know how to weed through the returned results and evaluate Internet resources. (e.g. Does the author identify him/herself? What are his/her credentials? Are the arguments supported by references to credible, peer-reviewed sources? Are the sources of information properly cited? etc.)

I will simply outline the inherent and insurmountable limitations of analog media, such as vinyl LPs, and highlight an important distinction between fidelity and preference that seems to be overlooked in digital-versus-analog debates.

Limitations of LPs

The mechanical nature of sound-signal capture and reproduction in LPs, with the associated issues of inertia, momentum, and interference, imposes frequency- and dynamic-response limits (i.e. limits on both the range and the fineness with which a signal’s frequency and amplitude content can be captured and reproduced without interfering with adjacent signals) that constitute an unavoidable fidelity bottleneck within the medium. CDs bypass these issues completely thanks to optical, non-contact methods of sound-signal storage and reproduction, assuming appropriate digitization (sampling-rate and bit-depth choice), storage (CD-surface and surface-coating choice), and handling (CD-surface protection during use, to minimize the need for digital error correction).

Fidelity versus preference

Advocates of LPs and other analog sound media often cite the analog sound’s greater “warmth,” “smoothness,” and “fullness” as the main reasons for choosing analog over digital. Interestingly, these subjective sound-quality characteristics are related to acoustic side effects imposed on live sonic events by the analog media themselves. Preference for such sonic qualities may be based on familiarity and habit (having grown up listening to music exclusively through analog media, showing a conditioned preference towards the “familiar”) or may constitute a conscious aesthetic choice (intentionally altering a sonic event, through the sound-quality distortions introduced by analog media, to achieve a given aesthetic result). Regardless of the reasons behind some listeners’ preference for the sound-quality distortions introduced by analog media, the fact is that the sound quality carried by such media is exactly that: distorted. Preference for analog over digital, or the other way around, occupies an inherently subjective gray area, and discussions of it can and will continue. However, when it comes to sound fidelity (how accurately an acoustic vibration is represented by a sound signal) the matter is black and white, with digital coming out the clear and indisputable winner.

MP3s and the Degradation of Listening

Don’t get me wrong! I own three iPods, which I use extensively and absolutely adore for their portability and other obvious advantages. I, of course, use them differently than most listeners. (If you are lazy or impatient, feel free to jump to the bottom of the page and read how.) Most listeners use mp3 players and mp3 files in ways that severely degrade sound quality and eventually erode the listener’s ability even to tell the difference between good and bad sound quality. But more on this a little later.

Disclaimer: For the cynics amongst you, I am not sponsored by any record label trying to boost CD sales; I actually could not care less. None of the information below is product-specific; it is based on facts and is common knowledge to anyone with a basic understanding of the physics of sound, digital sound processing, hearing physiology, and auditory perception. Ignore at your own risk!

CD sound quality

First, let me address some fundamental issues related to the relationship between CD sound data rates and sound quality.

CD quality is usually described in terms of:

  • sampling rate (44,100 samples/sec.),
  • bit depth (16 bits), and
  • stereo presentation.

Doing some simple math, we can figure out that CD-quality sound corresponds to a data rate of 1411 kbits/sec. (44,100 * 16 * 2 = 1,411,200 bits/sec. = ~1411 kbits/sec.) Sampling rate determines the upper frequency limit (corresponding, in general, to timbre, or sound quality) that can be faithfully represented in a digital sound file (about half of the sampling rate). Bit depth determines the dynamic range (i.e. difference between the softest and strongest sound) that can be faithfully represented in a digital sound file (~6 dB per bit).
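For readers who want to check the numbers, here is a minimal sketch of the arithmetic in Python (all figures come straight from the paragraph above; nothing here is specific to any particular player or encoder):

```python
# CD-quality audio parameters, as described above
SAMPLING_RATE = 44_100  # samples/sec.
BIT_DEPTH = 16          # bits per sample
CHANNELS = 2            # stereo

# Data rate = samples/sec. x bits/sample x channels
data_rate = SAMPLING_RATE * BIT_DEPTH * CHANNELS
print(f"Data rate: {data_rate:,} bits/sec. (~{data_rate // 1000} kbits/sec.)")
# -> Data rate: 1,411,200 bits/sec. (~1411 kbits/sec.)

# Nyquist: the highest faithfully representable frequency is about
# half the sampling rate
print(f"Upper frequency limit: ~{SAMPLING_RATE / 2:,.0f} Hz")  # -> ~22,050 Hz

# Dynamic range: ~6 dB per bit of depth
print(f"Dynamic range: ~{6 * BIT_DEPTH} dB")  # -> ~96 dB
```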

Given the maximum frequency and dynamic range of safe and functional human hearing (~20 kHz and ~100 dB respectively), CD-quality digital sound is very close to the best sound quality we can ever hear. There have been several valid arguments put forward advocating the need for sampling rates higher than 44,100 samples/sec. (e.g. 88,200 or 96,000 samples/sec.), bit depths greater than 16 bits (e.g. 24 or 32 bits), and more than two channels (e.g. various versions of surround sound). Depending on the type of sound in question (e.g. the sound’s frequency/dynamic range and spatial spread) and what you want to do with it (e.g. process/analyze it in some way or just listen to it), such increases may or may not result in a perceptible increase in sound quality. So for the vast majority of listening contexts, CD-quality sound (i.e. a 1411 kbits/sec. data rate) does correspond to the best quality sound one can hear.

Compressed sound quality

Now, let’s move to compressed sound, whether in mp3, iPod, Real, or any other format.

Every sound-compression technique has two objectives:

a) to reduce a sound file’s data rate and therefore overall file size (for easier download and storage) and

b) to accomplish (a) without noticeably degrading the perceived quality of the sound.

Sound-compression algorithms basically remove bits from a digital sound file, selecting the bits to be removed so that the lost information will not be perceived by listeners as a noticeable loss in quality.

Compression algorithms base their selective removal of information from a digital file on three perceptual principles:

  1. Just noticeable difference in frequency and intensity:
    Our ears’ ability to perceive frequency and intensity differences as pitch and loudness differences, respectively, is not as fine-grained as the frequency and intensity resolution supported by CD-quality sound. So it is possible to selectively remove some relevant information without listeners noticing its removal.
  2. Masking:
    Strong sounds at one frequency can mask soft sounds at nearby frequencies, making them inaudible. It is, therefore, possible to remove digital information representing soft frequencies that are closely surrounded by much stronger frequencies, without listeners noticing the removal, since they would not have been able to hear such soft sounds in the first place.
  3. Dependence of loudness on frequency:
    Even if different frequencies have the same intensity they do not sound equally loud. In general, for a given intensity, middle frequencies sound louder than high frequencies, which sound louder than low frequencies. Given the phenomenon of masking described above, this dependence of loudness on frequency allows us to remove some soft frequencies even if they are further away from a given strong frequency, providing an additional opportunity to remove bits (information) from a digital file without listeners noticing the loss. In addition, the dynamic range of hearing is much lower for low than for middle and high frequencies and may be adequately represented by ~10 versus 16 bits, offering one more possibility for unnoticeable data-rate reduction.
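To make the masking idea concrete, here is a toy sketch in Python. This is emphatically not any actual codec’s psychoacoustic model (real encoders work with critical bands, spreading functions, and temporal masking); the 20 dB threshold and 200 Hz “neighborhood” below are invented purely for illustration:

```python
# Toy illustration of frequency masking: flag spectral components that a
# much louder nearby component would render inaudible anyway.
# The threshold and neighborhood width are illustrative, not codec values.

def masked_components(components, neighborhood_hz=200.0, threshold_db=20.0):
    """components: list of (frequency_hz, level_db) pairs.
    Returns indices of components that could be dropped because a
    neighbor within `neighborhood_hz` is at least `threshold_db` louder."""
    droppable = []
    for i, (freq, level) in enumerate(components):
        for j, (other_freq, other_level) in enumerate(components):
            if i == j:
                continue
            close = abs(freq - other_freq) <= neighborhood_hz
            much_louder = other_level - level >= threshold_db
            if close and much_louder:
                droppable.append(i)
                break
    return droppable

# A strong component at 1000 Hz masks a soft one at 1100 Hz,
# but not an equally soft component far away at 5000 Hz.
spectrum = [(1000.0, -10.0), (1100.0, -45.0), (5000.0, -45.0)]
print(masked_components(spectrum))  # -> [1]
```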

Different compression algorithms (e.g. mp3, iTunes, etc.) implement the above principles in different ways, and each company claims to have the best algorithm, achieving the most reduction in file size with the least noticeable reduction in sound quality.

Digital music downloads and the stupefaction of a generation of listeners

Regardless of which company’s algorithm is best, one thing is certain. No matter how the previously discussed principles are implemented and no matter how inventive each company’s programmers are, there is no way for the above principles to support the over 90 percent reduction of information required to go from a CD-quality file to a standard mp3. In other words, reducing data rates from CD quality (1411 kbits/sec.) to the standard downloadable-music-file quality (128 kbits/sec.) is impossible without a noticeable deterioration in sound quality.
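The arithmetic behind the “over 90 percent” figure is easy to verify:

```python
cd_rate = 1411   # CD-quality data rate, kbits/sec.
mp3_rate = 128   # standard downloadable mp3 data rate, kbits/sec.

reduction = 1 - mp3_rate / cd_rate
print(f"Information discarded: ~{reduction:.1%}")  # -> ~90.9%
```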

In fact, the 139th meeting of the Acoustical Society of America devoted an entire session to the matter, with multiple acousticians and music researchers presenting perceptual studies on the relationship between compression data rates and sound quality. Based on these and other, more recent, relevant works, it appears that data rates below ~320 kbits/sec. result in clearly noticeable deterioration of perceived sound quality for all sound files with more than minimal frequency, dynamic, and spatial-spread ranges. (E.g. listening to early Ramones at low or high data rates will not make as much of a difference as listening to, say, the Beatles’ “Sergeant Pepper” album.) Such low data rates cannot faithfully represent wide ranges of perceivable frequency, intensity, and spatial-separation changes, resulting in ‘mp3s’ that retain only a small proportion of the sonic variations in the originally recorded file.

As data rates drop, there is a gradual deterioration in

a) frequency resolution (loss of high frequencies, translated as loss of clarity),

b) dynamic range (small dynamic changes can no longer be represented in the compressed file, resulting in flatter ‘volume’ song profiles), and

c) spatial spread (loss of cross-channel differences, resulting in either exaggeration or loss of stereo separation).

When this degradation of sound quality is combined with the fact that most young listeners get their music only online, what we end up with is a generation of listeners exposed to, and therefore ‘trained’ in, an impoverished listening environment. Prolonged and consistent exposure to impoverished listening environments is a recipe for cognitive deterioration in listening ability; that is, in the ability to focus attention on, and tell the difference between, fine (and, if we continue this way, even coarse) sound variations.

Such deterioration will affect not only how we listen to music but also sound perception and communication in general, since our ability to tell the difference between sound sources (i.e. who said what) and sound-source locations (i.e. where a sound came from) is intricately linked to our ability to focus attention on fine sound-quality differences.

What you should do

a) Do not listen to music exclusively in mp3 (or any other compressed) format.
Go to a live concert! Listen to a CD over a good home sound system, set of headphones, or car stereo!

b) Unless a piece of music is available only as a download, do not waste your money on iTunes or any other music-download service until such services start offering data rates greater than 300 kbits/sec.

c) When you load CDs onto your iPod or other device, select an uncompressed format (e.g. .wav or .aif). If you don’t have the hard-disk space on your player to do this, convert at the highest available data rate (currently 320 kbits/sec. on iTunes).

d) Finally, get a good pair of headphones for your mp3 player! The headsets bundled with iPods and most mp3 players are of such bad quality that they essentially create a tight bottleneck on the quality of your digital files and players. The response of these headphones has been designed to match the low quality of popular iTunes or other mp3 files (128 kbits/sec.). Mp3-player manufacturers do this for two wise (for them) reasons:

i) poor-quality headsets are cheap to produce and good enough to reproduce the poor-quality mp3 files you are fed, and

ii) poor-quality headsets prevent you from creating/requesting music files at higher data rates, because when listening over such headphones you cannot even tell the difference between good and bad sound quality.

Well, what can I say? Wake up and listen to the music!

Applying the Business Model to Education: Part II

Back in September, I wrote a post addressing some drawbacks of applying the business model to education. In the meantime, and thanks to Don Casey, Dean at DePaul’s School of Music, I came across Jim Collins’s Good to Great and the Social Sectors: Why Business Thinking is Not the Answer, a monograph accompanying Collins’s book Good to Great: Why Some Companies Make the Leap… and Others Don’t. I found the monograph extremely useful in the way it articulates and organizes the problems involved in applying the business model to education.

Below, I have constructed what is hopefully a meaningful collage of quotes from Jim Collins’s work (organized based on the monograph’s sections), making my arguments through his words and concluding by posing some questions.

(Introduction)

“We must reject the idea—well-intentioned, but dead wrong—that the primary path to greatness in the social sectors is to become ‘more like a business.’ Most businesses—like most of anything else in life—fall somewhere between mediocre and good. Few are great. When you compare great companies with good ones, many widely practiced business norms turn out to correlate with mediocrity, not greatness.

“The critical distinction is not between business and social [e.g. education], but between great and good. We need to reject the naive imposition of the ‘language of business’ on the social sectors, and instead jointly embrace the language of greatness.” 

(Calibrating success without business metrics)

“The confusion between inputs and outputs stems from one of the primary differences between business and the social sectors. In business, money is both an input (a resource for achieving greatness) and an output (a measure of greatness). In the social sectors, money is only an input and not a measure of greatness.”

“It doesn’t really matter whether you can quantify your results. What matters is that you rigorously assemble evidence—quantitative or qualitative—to track your progress. If the evidence is primarily qualitative, think like a trial lawyer assembling the combined body of evidence.”

“In the social sectors, performance is defined by results and efficiency in delivering on the social mission… [A great organization] makes such a unique contribution to the communities it touches and does its work with such excellence that if it were to disappear, it would leave a hole that could not be easily filled by any other institution… [It] can deliver exceptional results over a long period of time, beyond any single leader, idea, … or well-funded program.”

(Getting things done within a diffuse power structure)

“Social sector leaders are not less decisive than business leaders as a general rule; they only appear that way to those who fail to grasp the complex governance and diffuse power structures common to social sectors.”

“In executive leadership, the individual leader has enough concentrated power to simply make the right decision… Legislative leadership [on the other hand] relies more upon persuasion, political currency, and shared interests to create the conditions for the right decisions to happen. And it is precisely this legislative dynamic that makes Level 5 leadership particularly important to the social sector.”

“True leadership only exists if people follow when they have the freedom not to.”

“There is an irony in all this. Social sector organizations increasingly look to business for leadership models and talent, yet I suspect we will find more true leadership in the social sectors than the business sector.”

(Rethinking the economic engine without a profit motive)

“[The Hedgehog Concepts of great companies reflect] deep understanding of three intersecting circles: a) what you are deeply passionate about, b) what you can be the best in the world at, and c) what drives your economic engine… A fundamental difference between the business and social sectors [is that] … the third circle shifts from being an economic engine to a resource engine. The critical question is not ‘How much money do we make?’ but ‘How can we develop a sustainable resource engine to deliver superior performance relative to our mission?’”

“The resource engine has three basic components: time, money, and brand. ‘Time’ refers to how well you attract people willing to contribute their efforts for free, or at rates below what their talents would yield in business. ‘Money’ refers to sustained cash flow. ‘Brand’ refers to how well your organization can cultivate a deep well of emotional goodwill and mind-share of potential supporters [as well as the respect and admiration of those demanding the services offered].”

I will conclude this post by posing the following questions:

a) Could the recent trend to assess educational institutions’ performance based on business models and metrics reflect more our degree of familiarity with such models/metrics and less their fitness to the task?

b) Assuming that an educational institution’s/department’s mission is systematic, rigorous, and representative of its members’ passions, shouldn’t assessment of the institution’s/department’s success be tightly linked to achieving excellence relative to this mission rather than to some easily measurable bottom line that is irrelevant to the mission?

c) Regardless of whether or not we approach education altruistically, isn’t it about time we became honest enough to modify either our altruistic missions to match our bottom-line assessments or our assessments to match our socially conscious, rather than business-based, missions?

Soft (Arts) vs. Hard (Sciences/Technology) Education: Imagination vs. Reason

Both the low marketability of arts degrees and the low salaries of arts educators in our society, compared to the marketability of degrees and the salaries of educators in science or technology, reflect an attitude towards the arts that sees them as accessories to our lives, good mainly for entertainment, pleasure, or escape. This attitude frequently undermines arts-education funding and is due, for some, to the admitted difficulty non-artists and artists alike face when trying to assess success in arts education and production with measures that make sense to, and can be appreciated by, “non-believers.”

Assessing arts education outcomes (Hanna, 2007)

To this end, Dr. Wendell Hanna (San Francisco State University) recently published a well-written and organized article on the applicability of the new Bloom’s taxonomy to arts education assessment [Hanna, W. (2007). “The New Bloom’s Taxonomy: Implications for Music Education.” Arts Education Policy Review, 108(4): 7-16.]. The first section of the article offers an insightful and concise outline of the significance of assessing music education outcomes and of the history and current state of Bloom’s taxonomy as an education-accomplishment assessment tool. It is followed by a meticulous and convincing (even if a little tedious at times) set of arguments for the way music education activities and national standards fit within the new Bloom’s taxonomy.

Hanna (2007) effectively accomplishes her principal goal, to show that:

Music education functions within and contributes to the same types of knowledge acquisition and cognitive processes, and its outcomes can be assessed using conceptually the same standards and tools as other educational areas that deal with topics traditionally more “respected,” “objective,” and widely accepted as beneficial to individual and social behavior and success.

Does “high assessment” translate to “high value?”

Whether the above conclusion can support claims for the need to keep music education in schools is not as clear to me as it seems to be to the author. Based on her concluding sections, Hanna seems more interested in promoting the usefulness of a new, uniform, and standardized assessment tool than she is in arguing for the general value of musical accomplishments. The goal of this assessment tool is to make communication of musical accomplishments among “music lovers” and between music lovers and non-music administrators easy, efficient, and consistent with concepts non-experts are familiar with. However, defending the value of music education in promoting the individual and social development of students is, in my opinion, a most pressing issue, as the way it is resolved will determine whether or not accomplishing the goal set in Hanna’s paper is of any consequence.

For example, even the process of systematically learning how to knit can be made to fit, to some degree or another, the knowledge acquisition and cognitive processes outlined by Bloom’s taxonomy. This offers us useful ways to assess what processes have been used and to what end and degree of success. Such an exercise, however, will not answer the question of whether the specific “end” in question is “valuable,” “respectable,” and useful to the individual and the society beyond the limited bounds of the activity itself. Algebra, biology, geometry, and all the other “respectable” educational subjects are not respectable simply because students end up learning how to solve equations or properly identify a frog’s internal organs. Rather, they are valued because of what one can contribute to society thanks to her advanced mathematical and scientific skills.

Precisely what these contributions may be is not made explicitly clear, but their value is implicitly accepted as being significant within our culture. On the other hand, what a student can contribute thanks to how well educated she is in music is even less clear and, largely, not accepted as valuable. It seems to me that, before one can appreciate how good a student has become in music and how consistently we can assess her accomplishments based on a standard tool, we must address the question, “Why should anyone become good in music?” The typical response: “for no good reason beyond entertainment and escape,” reveals an attitude that threatens to make efforts like Hanna’s ultimately inconsequential.

The cognitive significance of art & imagination vs. reason

In my opinion, the way to go is to systematically and convincingly argue for the cognitive significance of art in general and music in particular—a non-trivial task that is beyond the scope of the present post. To get things going, however, I would like to briefly assess the longstanding, conventional opposition between imagination and reason, which, I believe, is behind our difficulty in appreciating art’s cognitive significance.

Bear with me for one more paragraph, as I will be tracing an arguably problematic rational consequence of such opposition.

Common sense understands imagination as a mental activity that deals with things that are not really there. It is opposed to reason, which is consequently supposed to be dealing with things that are really there. At the same time, the observation that not all future events can be predicted based solely on past and present observations indicates that future things must include things that do not already belong to the past or present. If the future includes things that are not present (i.e. are not really there) or past (i.e. have never really been there), then reason, by definition, cannot address it. Such a limitation severely undermines the importance of reason to our lives, by stripping from it the power to, in any radical way, influence our outlook. The only way reason can address future things is by making believe that such things—things that are not really there—are present, so that it can subject them to determinate and reflective judgment. In other words, in order for the future to be reasoned with, it first has to be imagined. The conventional opposition between imagination and reason, and the accompanying assumption of reason’s superiority, leads, therefore, to a curious and paradoxical “reason” that is superior to imagination but impotent without it.

Until convinced otherwise, I, for one, will keep imagining.

Applying the Business Model to Education: Current Failures, Future Possibilities

In recent years, there has been a growing trend to view educational institutions as businesses, assessing them in terms of business models and measures. Consistent with such models, institutions are required to justify their existence based not on criteria such as quality of faculty or resources, but on whether they:

  1. satisfy a current demand,
  2. anticipate a future one,
  3. keep their clients happy,
  4. continuously increase product offerings (courses/programs) and sales (enrollment), and
  5. positively balance their books.

This trend arose partially from the need to move away from the subjective and over-emotional manner in which education has been traditionally approached (vague references to intellectual maturity and greater good) and was encouraged by the increasing reliance of educational institutions on state or private “investors,” who demand increasingly measurable, objective, short-term “return on investment.”

Conceptual and Practical Problems with the Business Model in Education

In the business model of education, the institution is viewed as the “service provider” and the students are viewed as the “clients.” The only tangible and measurable components of the transactions between the two in the current version of the model are the fees the students pay to attend an institution and the degree (“product”) students receive at the end of their residency at the institution.

However, unlike any other business transaction in the US, payment of the fees does not guarantee that the “clients” will:

  1. always be right (by definition, the opposite is most often the case),
  2. receive the end product (the “provider” actually delivers the “product” based on criteria other than fee payment),
  3. be able to return the end product for a refund, exchange, or credit if it does not fulfill the expectations raised by the institution (there is no system in place to hold providers accountable for their products), or
  4. get a refund if they eventually change their minds and decide not to attend the institution.

To stay consistent with their current business model version, institutions would have to either:

  1. provide degrees upon payment (I do get several emails per day advertising just that), eliminating in the process the degrees’ value and therefore the institutions’ reason for existence or
  2. issue refunds to students that do not earn the degrees, permitting noncommittal students to take up resources and bankrupt their business.

Hypothetical Solution

One could envision a two-stage model in which the provider-client roles switch halfway through the paying-fees-receiving-degree process.

Stage 1: Institutions as Service Providers, Students as Clients

In this stage, students pay a fee. In return they get access to resources that facilitate and structure learning, such as:

  1. qualified, accomplished, passionate instructors,
  2. comprehensive, manageable, and timely curricula, and
  3. physical and virtual facilities that promote retrieval and dissemination of high quality information related to the educational area they paid for.

These resources are clearly spelled out in the institution’s mission/advertising/contract with their “clients” (through admissions policies, for example). After the service has been provided (e.g. at the end of each quarter), clients have the right to evaluate the service they received and examine whether it fulfilled the admissions contract. If it has not, they should be able to request remedies such as:

  1. improvement in instruction/curricular resources and
  2. re-offering of a course for a reduced or waived fee.

If these requests are not satisfied, students should be entitled to a refund. This is where the first stage of the transaction ends.

Stage 2: Students as Service Providers, Institutions as Clients

In this stage, institutions “pay” students with a grade and/or degree. Degrees are the currencies of educational institutions. Their value has been earned through the universities’ work and, like all currencies, degrees carry a proof/promise of value and can be “handed over” in return for employment (among other things).

Once students have completed stage one and have accepted the educational service they received as fulfilling the admissions contract, the institution demands that students demonstrate that they deserve the grade/degree. Students do this in the form of:

  1. exams,
  2. tests,
  3. submitted projects, etc.

In stage one, it was up to the students to assess whether the institution provided them with what was promised in the admissions contract. In stage two, it is up to the institution to determine whether or not the students can provide the “service” necessary to earn the degree, which constitutes a certification that the recipient has demonstrated thorough knowledge of the topic the degree is for.

Staying within the business context, the reasons institutions would enter stage two and require proof that the students deserve the “payment” (degree) cannot be of the vague, education-for-the-greater-good kind. In other words, it cannot be about ensuring that the students have grown intellectually, are better and more knowledgeable and experienced individuals, and can better serve society. Rather, the reasons for requiring proof before handing out degrees will be about ensuring that the promise this degree makes to the world is true (the promise that the recipient has demonstrated thorough knowledge of a topic and has acquired certain certified skills). The motivation is that ‘true’ degrees result in:

  1. happy employers of the degree recipients,
  2. trust in the institution,
  3. demand for recipients of the institution’s degrees, and, consequently
  4. increase in the institution’s business, the ultimate measure of any business’s success.

Such an approach to education-as-business and to the meaning of a degree would be more consistent with the scope of a true business model. The question that remains is, “Is this what we want education to be?”