© The Journal of Behavioral and Applied Management – Summer/Fall 2001 – Vol. 3(1) Page 13
Toward Collaboration and Inclusion:
The Electronic Portfolio and Outcomes Assessment
Anthony F. Chelte
Western New England College
The need to move away from autonomous academic endeavors toward a more inclusive process involving all stakeholders in higher education provides the backdrop for this article. The Paradigm of Autonomy is introduced, and arguments for moving away from this view toward one of collaboration and inclusion are framed within the context of educational outcomes and assessment. Research has shown that little work has been done across institutions to develop "shared" outcomes, assessment measures, and values that span curricula. The Electronic Portfolio (the use of the Internet as a platform for wide distribution and discussion of materials) is introduced as a mechanism for harnessing information technology to build discourse around, and acceptance of, wide-ranging outcomes and assessment measures. The involvement of all stakeholders in the process, particularly those beyond the academy, is seen as an integral part of building an emerging paradigm of collaboration and inclusion in the development of shared outcomes and assessment metrics. The use of the Electronic Portfolio as the information technology vehicle is described in some detail.
Beginning with the End in Mind
A careful examination of our higher education institutions reveals that faculty are largely rewarded for autonomous efforts. These activities include classroom pedagogy, student evaluation, and scholarly research and publication. This collection of activities, in part, constitutes a Paradigm of Autonomy. The recent emphasis on outcomes assessment and the interest of stakeholders (e.g., employers, alumni, legislators, and accrediting bodies) in "reforming" higher education have created an urgency to move away from this traditional paradigm. The conflict of values can be seen clearly in the movement toward outcomes assessment.
This paper argues for a mechanism that may move us toward a far-reaching dialogue to develop, refine, and measure educational outcomes that transcend institutional boundaries. At the core of this approach is the introduction of the Electronic Portfolio. It has been defined as "a purposeful collection of work, captured by electronic means that serves as an exhibit of individual efforts, progress, and achievements in one or more areas" (Wiedmer, 1998:586). For our purposes, the discussion that ensues proposes a much wider application of this tool. In brief, the goal is to use the World Wide Web to publish "outcomes" and other work reflecting skill sets, competency development, and other material that can serve as a basis for judgment. This publication of material will serve as a starting point for a cross-institutional discourse aimed at developing a common set of outcomes and a set of applicable assessment metrics that enjoy wide acceptance. Further, the use of the Internet to facilitate this process is designed specifically to involve the stakeholder groups external to our educational institutions in this broad discussion. It will create a platform from which discussion leading toward synthesis can occur among all stakeholders in the
higher education process. This approach can serve as a true "peer review" process (one that includes other interested parties, e.g., employers and alumni) as we move toward a developmental model of cross-institutional outcomes and assessment. The longer-term goal is to begin to shatter the old and ineffectual paradigms that hold fast in many institutions and to move rapidly toward emerging paradigms of collaboration and inclusion.
"Old and in the Way…"
Traditional paradigms held by many in higher education are in the midst of a frontal assault. A paradigm of autonomy that rewards solitary faculty endeavors has come up against an external environment that values broader stakeholder participation in the educational process and collaboration in defining acceptable standards. A significant part of this shift concerns how outcomes and assessment are defined and implemented in higher education. Led by ever-increasing advances in information technology, the environment has changed in fundamental ways, moving away from the traditional values held by a plurality of academicians. The conditions, both necessary and sufficient, are right for a new paradigm to emerge--one consistent with values of collaboration and participation.
Paradigms guiding policies at many institutions of higher learning are also under mounting pressure to reform. Institutions have been criticized for fostering programs that are too esoteric in applying outcomes and assessment techniques to student learning. As such, these efforts do not transfer across organizations. The pattern is familiar: several different programs in several different institutions, resulting in several different approaches that yield several different results. Unfortunately, and far too often in the outcomes/assessment arena, this leads to varying standards for similar curricula across institutions. A search for a new and emerging paradigm that fosters collaborative, cross-institutional thinking and stakeholder participation is long overdue. It is clear that if we fail to take action on the outcomes/assessment front, legislative and accrediting bodies are poised to act. Fundamental changes must therefore occur quickly in the traditional paradigms. To this end, the following discussion will explore:
· The need to confront and create a shift in the paradigm of autonomy;
· The need to develop more cross-institutional curricular standards, in part by welcoming wider, more collaborative partnerships with stakeholder groups outside of the academy (academy is used here and throughout as a generic and collective term for academicians responsible for directly or indirectly facilitating the educational process);
· The need to develop a common ground for developing, discussing and evaluating educational outcomes and their assessment; and
· The need to use the Electronic Portfolio as the central means to achieve these objectives.
Through this approach we may collaboratively develop new and emerging paradigms that view education as a process. And, through this process, we may begin the arduous task of developing and sustaining shared outcomes, which may in turn yield metrics leading to widely accepted assessment. In many ways, this
paper represents an opportunity for a new beginning: utilizing the Internet and its information technology to facilitate communication and to build a platform for serious discussion of outcomes and assessment. The approach taken in this paper may assist our efforts as we seek innovative ways to continuously improve what we do in higher education. Ultimately, the promise of developing new and adaptable paradigms can only be realized through a shift away from autonomy toward a paradigm that values the inclusion and collaboration of all stakeholders in the enterprise of higher education.
It is important, however, to recognize that institutions have different missions and, as such, differing stakeholder groups. The large "research" institution's purpose may be markedly different from that of the small "teaching" institution. Additionally, the evaluation processes (e.g., course content testing) deployed in the larger institutions will probably differ. In large auditorium-style classes, for example, wider use of objective exams may be necessary, while in smaller classes other vehicles may be employed (e.g., essays and case analyses). However, it is a fair argument that the "content" of courses across institutions should be comparable.
A Call for Reform
The higher education reform pendulum has swung away from an emphasis on formative evaluation in the 1980s toward the implementation of summative evaluation in the 1990s. Specifically, the focus on evaluation has broadened to a more inclusive indicator of success: the "outcomes" of higher education. Beyond evaluation and outcomes assessment, the "process" or delivery systems of higher education have come under increasing public scrutiny as well. Legislators, accrediting agencies, and a growing number of interested stakeholders have directed their attention to higher education, examining content issues (what is being taught), process issues (how it is being taught), outcome issues (what is being learned), and assessment issues (how it is being measured). Considerable debate, controversy, and a growing forum in political arenas have all marked the discussion of these issues. Based on these recent trends, it appears certain that the issues of outcomes and assessment will be with us for a long time to come. In fact, these two concepts are being operationalized as the mechanism by which significant criteria for evaluating educational programs and processes are developed.
Increasing pressure has been directed at educators to improve content, pedagogy, and the outcomes of the educational enterprise. Significant technological advances have helped punctuate the way we think about education delivery systems. In fact, these advances have created new and interesting ways to deliver education to a broader audience through vehicles such as distance learning. Technology has also created the need to rethink our pedagogy as the values and paradigms of entering students have changed dramatically. As the delivery mechanisms change, so must our evaluation systems. We must be vigilant in defining "outcomes" more carefully, and we must commit ourselves to a more sophisticated approach to developing "assessment" metrics. This has not been an easy task for many traditionalists in higher education. As will be seen below, traditional paradigms of higher education still prevail. However, substantive changes in technology and in other factors in the external environment demand a paradigm shift away from the traditional view held by many academicians.
Public attention has been focused on the need to improve the quality of graduates. "Test mania" has begun to take center stage across the education system as summative evaluation gains momentum. One can see this gradual shift taking place in important areas of educational processes, such as the broad introduction of teaching competency exams. Much more alarming than the introduction of standardized professional exams is the high failure rate among prospective teachers (e.g., in Massachusetts). Some have elected to draw the inference from this outcomes assessment metric that the educational process leading to teaching certification is in dire need of reform. Some more vocal critics (such as politicians in an election year) have argued that the system is broken. This may or may not be accurate; reliance on a single summative assessment is usually neither sufficient nor appropriate, and the debate looms large in this respect. What is clear, however, is an emerging public mandate (through legislative and accrediting efforts) demanding, at the least, that the traditional paradigm of higher education be carefully scrutinized and, at best, that a new and emerging paradigm be considered. It is apparent that many in the public arena feel that the current system is seriously flawed.
Outcomes and Accountability
In the absence of empirical evidence to the contrary (and, more importantly, consensus on what the evidence should be), one tends to believe what one sees and hears. The criticisms aimed at education in the 1980s were often cast as efforts directed at bringing about improvement. One focal point of these efforts continues to occupy center stage: "outcomes assessment." However, these same critical efforts seem to be aimed more at achieving accountability in educational systems than at developing and implementing any lasting reforms (Sewall, 1996). For example, rather than critically evaluating the paradigm of course delivery systems and educational processes, critics sometimes champion standardized tests as the exclusive indicator of flawed processes (consider again, for example, the extraordinarily high failure rate among prospective teachers taking the certification exam in Massachusetts)!
The challenge for academicians and other stakeholders in the educational process is one of management -- managing the outcome and assessment process! This requires a fundamental change in the way we view our roles as educators (and how we view the roles of other stakeholders). We must acknowledge ownership of the process. Only through an effective management approach can we begin to critically examine existing paradigms and move toward developing models that respond to the incredibly fast-paced changes, led by information technology, we are experiencing in our lives. As will be seen, we as educators can harness this technology to help develop ownership and responsibility for emerging paradigms that will satisfy the critics and lead to a broader discussion on learning, outcomes, and assessment.
The critics are not entirely without foundation for their position. Accreditation agencies have mandated outcome assessment as a key process criterion for member colleges and universities. The initial results however have produced reactive tensions within
and from universities and colleges, particularly among faculty. Controversy, confusion, and resistance characterize many attempts to systematically implement assessment processes. The general malaise that critics point to has become more focused: "the most damning thing that you can say about American higher education is that there is no way to tell who is educating well and who is not. …[R]ankings provide a variety of indicators of 'institutional quality,' but nothing that directly measures what is most important: what students learn" (Hoyler, 1998).
The Paradigm of Autonomy and a Paradigm Shift
While they have their merits, the critiques of higher education seem to pay far less attention to the numerous organizational and curricular attempts at developing systematic outcomes assessment. Critics have argued that where there is evidence of systematic attempts, they have been designed largely to validate what the educational process is already doing well rather than to identify areas for improvement. The traditional paradigm is thereby reaffirmed. Consider, further, that the assessment efforts that have been reported are those focused exclusively within the organization. That is, outcomes assessment is pegged specifically to the values, mission, and curricular standards of individual institutions. Admittedly, this is a step in a positive direction, as it promotes awareness of the essential areas of outcomes and assessment. However, more attention needs to be placed on outcomes assessment across institutional boundaries: a consensus on outcomes that "all" graduates of institutions of higher learning must demonstrate and, ideally, common assessment metrics that can be agreed upon by those who would ultimately evaluate our students within and beyond the institution. This stakeholder group must be expanded beyond faculty to include potential employers. This is particularly true for business students with prospects for bright futures in business organizations. I do not mean to imply that students of other disciplines should be exempt from this scrutiny. The point is that students vying for professional positions should have their credentials subject to a review beyond that of faculty.
We are left with a sense that there have been too few attempts that transcend institutional boundaries. Not surprisingly, then, many of the results stemming from institutional efforts at outcomes assessment have been isolated and idiosyncratic and focused more on intra-institutional concerns rather than cross-institutional improvement. The results, understandably, have been fragmented.
In some cases (perhaps more often than not), assessment efforts have been derailed before they had a chance to start. This is largely due to philosophical objections to assessment per se, which stem from the difficulty of achieving consensus over "what" should be measured. Significant aspects of this controversy (and, more importantly, the ultimate responsibility to address it) rest with faculty. The controversy is marked by pointed discussions over how and whether "qualitative" knowledge can be measured, what content is most important, and a reliance on positions that reinforce the status quo (why change what has been successful for so many years?). This latter point usually manifests itself in a defense of current processes emphasizing "testing" and cumulative GPAs. These distractions keep faculty from confronting the central issues involved in the overall concept of outcomes assessment.
Toward a Practice of What is Preached
It is fair to say that many faculty lack enthusiasm for developing assessment tools based on specifically described outcomes. It is interesting that this intransigence occurs in professional schools as well (e.g., Schools of Business). In industry, goal setting and evaluation are critical to the success of the firm, at all levels from strategy through operations. It is somewhat ironic that we "teach" goal setting and decision making yet sometimes hold ourselves to a different set of standards when it comes to our own processes (i.e., outcomes). This is easy to understand: "Like most people these days, faculty members are already over-scheduled, and assessment is simply one more thing to do" (Hoyler, 1998). Further, a prevailing paradigm among faculty suggests that "mandated" reforms such as outcomes assessment infringe on academic freedom, or that university-wide efforts in this regard represent nothing more than additional committee work leading to no apparent destination (Hoyler, 1998). One can see here thinking that is bounded by traditional paradigms. Would it not be interesting if we attempted what Jack Welch has done at GE: creating a culture that celebrates a boundaryless organization? Alas, many of us are caught in a self-fulfilling prophecy about committee assignments--nothing good or productive will come of them. Participation in college governance is a required activity for performance review, and many such mechanisms throughout our academic institutions serve to reinforce the "values" inherent in our traditional paradigms. In a sense, we allow ourselves to be held captive within the boundaries of the organization. It is truly difficult to "think outside of the box."
A "culture of collaboration" is largely absent in colleges and universities, as American higher education encourages and traditionally rewards individual autonomy: we design our own courses, do our own teaching, set our own standards, and construct and grade our own exams (Hoyler, 1998). To think "outside this box" does not come easily to many traditionalists in academic institutions. The focus is more on what "my courses" consist of and what "my students" are "learning" than on a genuine concern for the educational process as a whole. Some institutions have recently recognized this and have articulated values that emphasize "Integrated Liberal and Professional Education" (cf. WNEC, 1998). Typical faculty evaluations reinforce traditional thinking and behavior among faculty. Heavy reliance on student evaluations, for example, suggests that attention to one's own pedagogy, course structure, and delivery mechanisms is most important. Evaluation schemes do not usually assess faculty collaboration on outcomes assessment.
If real progress in assessment efforts is to have half a chance of success, a paradigm shift is needed, particularly among faculty. This process, of course, must begin with strong and visionary leadership. Cajoling by accreditors and criticism from politicians have the effect of creating defensiveness rather than reform-minded approaches. Efforts must also commence at the departmental level to begin a dialogue on basic, discipline-specific competencies. However, the vision that this author has transcends the
departmental boundaries within a given institution. The paradigm vision here is a collaborative dialogue, beyond micro-focused, departmentally based discourse, that involves as many scholars and practitioners as appropriate in a discipline. The "outcomes" of this dialogue can be a set of competencies (outcomes), and their measurement (assessment), that all students within a particular major field of inquiry must meet and demonstrate. This goes beyond institution-specific "values" and "outcomes." It transforms the "autonomy" valued so highly within our academic departments into a value set holding that a larger discussion across institutions is necessary to ensure the emergence of ever more effective paradigms. The paradigm of autonomy must provide room for collaboration among faculty and other stakeholders if real change is to be embraced and ultimately realized.
Role of Assessment
Issues regarding the role that assessment should play are also embroiled in controversy. Whether assessment should be summative or formative is a legitimate issue. Traditionally, examinations and other standard metrics qualify as summative evaluation. While summative indicators add value, formative assessment needs to become a more pervasive objective in higher education. Formative assessment, however, raises issues of accountability. Such accountability is typically directed toward faculty, who have traditionally resisted these efforts. This view is consistent with the problems endemic to the paradigm of autonomy suggested earlier.
Perhaps a different vision would be beneficial. As in industry, higher education would do well to be viewed as a "process" that can be continuously improved. One must first understand what that process is and then move to ensure that the process is "in control." Part of this control would certainly include outcomes assessment (both formative and summative) as a critical indicator of the health of the system. Just as one cannot "inspect quality into" a product at the output end, neither can the educational process improve its quality by attempting to "inspect quality in" after the fact, as summative assessment emphasizes. Obviously, this is both ineffectual and a waste of resources. If viewed as a process, higher education would naturally include critical formative measurement throughout its cycle. Corrective action can only be effective when implemented closest to the place and time the problem occurs. This approach, of course, adds to the accountability of those responsible for the educational process. It requires us to become adept at problem identification, analysis, and solution implementation: the same values that we attempt to instill in our students. This mandates yet a further change in the paradigm of autonomy, toward one that includes responsibility and accountability.
The traditional paradigm is reflected in many institutional approaches toward the articulation of values. For example, many of our institutions of higher learning proudly proclaim that values of "life-long learning" and "critical thinking" are instilled in their students. However, summative assessment alone makes it difficult to discern whether the institution has succeeded in inculcating these values in students in a clear and convincing fashion.
To wait until the end of the cycle is to commit a fundamental management faux pas. This is akin to a performance appraisal process that focuses on performance "problems" well after they have occurred, at which point it is far too late to take corrective and formative action. Avoiding conflict during the performance period is a shortsighted, though easier and less time-consuming, strategy. I would argue that by implementing both summative and formative metrics in a "public way" (through the Electronic Portfolio), higher education can be seen more as a process that is developmental, innovative, and committed to continuous improvement.
A perspective that emphasizes innovation and continuous improvement is consistent with many "professed" institutional "values," such as developing "life-long learning," "critical thinking," and other ongoing skill and value sets. Unfortunately, institutions (or programs or curricula) that espouse these values rely far too often on summative evaluation. This perspective calls for those of us in the academy to adhere to the same principles that we "profess" to develop in our students. In my view, relying primarily upon summative assessment misses the mark.
However pervasive the reliance on summative evaluation, we would be wise to increase our awareness of, and sensitivity to, the standards emanating from America 2000, the federal government's overview of educational reform. In fact, our professional practice will most likely be affected by the gradual introduction of instruments designed to measure what we are accomplishing as educators. We would be wise to work collaboratively as faculty to do unfamiliar things like setting common goals and standards, devising methods of assessment, interpreting the results, and "using them to improve and coordinate our teaching" (Cramer, 1994; Doyle, 1991; Edwards & Brannen, 1990; Hoyler, 1998; Sewall, 1996).
A compelling argument for the collaborative formation of standards and evaluation can be found in the writings of Sewall (1996) and Hoyler (1998). Citing practices in other systems (China, Great Britain, Japan, etc.), they show that educational systems in these societies have tried to ensure that "exams represent, if not national standards, at least something more than those of the individual instructor" (Hoyler, 1998). Put another way, judgment of results is extended to a larger community of evaluators than the single instructor alone. This perspective suggests that traditional, autonomous approaches in higher education may harbor an inherent conflict of interest between our roles as teachers and evaluators (akin to the coach serving as referee).
In a sense, I think that we are cognizant of this dilemma on another level. Consider that we have developed both summative and formative norms and metrics for advancement through the academic ranks. We have Peer Review of our activities for these purposes. We have peer review of our scholarly work to critically evaluate its "value" for dissemination to a larger audience through publication or presentation at professional meetings. Here we have assumed that we have in place standards of evaluation that are shared and valued. That work is expected to be submitted to a larger audience for its review is generally not disputed among academicians. Is this not outcomes assessment in practice? If we have been able to achieve some success in our own professional practices, why is it that we have not applied the same principles and methodology to the creation of a set of collaborative standards for our students' work?
Developing an Approach
Outcome assessment as a process demands collaboration and necessitates the continuous review and improvement of metrics that have both validity and reliability beyond standardized examinations. There are larger stakeholder groups beyond the academy to which we must be responsive. This is particularly true in Schools of Business where one of our chief aims is to prepare students for high-level contributions to our business organizations. To achieve these ends requires processes that have not yet been integrated in our academic institutions.
With all the controversy, discussion, and programmatic efforts underway in many academic environments, it is difficult to identify any universal standards for outcomes assessment. The literature has become voluminous, and attempts at synthesizing the various approaches to assessment and outcomes have not resulted in a set of unifying standards (cf. Cramer, 1994; Davies, 1994; Doyle, 1991; Edwards & Brannen, 1990; Garcia, 1986; Goodlad, 1990; Gray, 1989; Hutchings & Marchese, 1990; Halpern, 1987; Jennings, 1989; Kean, 1987; Lenn, 1989; Mentkowski et al., 1991; Nedweck & Neal, 1993; Phipps & Romesburg, 1988; Ratcliff, 1992; Shepard, 1991; Spangehl, 1987; Terenzini, 1989; Wise).
One specific approach that holds much promise is the student portfolio. The student portfolio appears to have captured significant attention as a way in which we can demonstrate "outcomes." Much of the application of student portfolios found in elementary and secondary education is in response to real or anticipated standards imposed by legislative or accrediting bodies. The focus for many is on student self-evaluation through the use of the portfolio. The value lies in the way it allows students to gain insights into their own performance and improve their own communication skills (Sewall, 1996). Thus, the use of the portfolio has both summative and formative potential. The National Education Association has also endorsed portfolios as an alternative or complement to traditional progress assessments such as exams. The NEA cautions, however, that some educators fear that the aggregate results given by the instrument may actually be inaccurate and that parents prefer traditional assessment (NEA Today, 1996).
Portfolios as assessment tools, as they are currently applied, appear to have the same inherent weaknesses as the various approaches to outcomes assessment itself. There appears to be a lack of collaborative standards by which the content is determined and by which that content is viewed. The paradigms of higher education (e.g., autonomy) appear to act as barriers to developing common, broad, consensus-based metrics that transcend institutional boundaries. This requires a dialogue that goes beyond the silo mentality often found in educational institutions organized around departments. Interestingly, our business organizations are moving toward cross-functional, team-based processes to overcome the weaknesses associated with departmentalization; we, however, seem to continue to rely on traditional paradigms of organizational design and structure in the academy. What is needed is a dialogue leading to broader agreement about which outcomes should be valued for specific curricula (e.g., business education) and, of course, for college-wide outcomes (e.g., critical thinking, life-long learning) that transcend institution-specific boundaries. A vehicle that holds promise to move us toward this dialogue is the Electronic Portfolio.
The Electronic Portfolio
Access to the Information Superhighway is easy. Many colleges and universities have wired their classrooms, offices, campuses, and residential facilities with high-speed Internet access. Specific innovations include distance learning programs; college and university web sites; course syllabi, notes, and resource information accessible to all students; and a plethora of other potentially educational resources (e.g., direct access to university libraries, electronic periodicals, and texts). Most of our business organizations are likewise wired for the Internet. This common access to the technology needs to be exploited for collaborative and inclusive purposes.
The Internet can be harnessed for meaningful innovation in outcomes assessment. The electronic portfolio (also referred to as the cyber portfolio or digital portfolio) may offer a starting point. Up to now, use of the digital portfolio has been summative and limited to specific purposes such as electronic résumés. I believe the power of the electronic portfolio can transcend traditional paradigms held by many academicians; specifically, a commitment to the electronic student portfolio holds much promise. Once the idea is accepted in principle, a dialogue leading to agreement on standards and outcomes across a wider group of stakeholders in the educational process becomes possible. By inviting participation across institutional boundaries, institutions can move beyond an exclusive concern with outcomes tied to their own mission statements, goals, and objectives. These remain important, of course, but it is time to establish a dialogue across stakeholder groups both within and outside the educational institution.
A suitable place to start, I believe, is a discussion centered on defining a common set of outcomes essential to specific curricular areas (e.g., Management, Business Strategy, Organizational Behavior, and Operations Management). This discussion can occur naturally over the Internet through a chat room designed for the purpose. Simultaneously, we can share, discuss, define, and refine the metrics that will serve as our assessments in these areas. This approach, of course, requires input from a wider group of stakeholders than would normally be entertained in the educational arena: academicians, practitioners, and other interested parties all need to be involved.
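To make the idea concrete, a shared, discipline-specific set of outcomes and metrics could be exchanged among institutions as structured data. The sketch below is purely illustrative: the discipline, outcome statements, metric names, and field layout are hypothetical placeholders of my own, not anything the article prescribes.

```python
# Illustrative sketch: a shared outcome specification that institutions and
# practitioners could post, discuss, and refine online. All names are
# hypothetical, assumed for the example only.

shared_outcomes = {
    "discipline": "Organizational Behavior",
    "outcomes": [
        {
            "id": "OB-1",
            "statement": "Apply motivation theories to workplace scenarios",
            "metrics": ["case-analysis rubric", "peer-reviewed project"],
        },
        {
            "id": "OB-2",
            "statement": "Analyze group dynamics with critical thinking",
            "metrics": ["reflective essay", "simulation debrief"],
        },
    ],
}

def outcome_ids(spec):
    """Return the identifiers of all outcomes in a shared specification."""
    return [o["id"] for o in spec["outcomes"]]

print(outcome_ids(shared_outcomes))  # -> ['OB-1', 'OB-2']
```

A structured representation like this is what would let metrics be compared and refined across institutional boundaries rather than restated in each school's local documents.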
How can we establish a vehicle that allows simultaneous discourse across these divergent groups? The answer lies, I believe, in the electronic portfolio. Portfolio approaches can be both summative and formative. The use of electronic portfolios is gaining popularity as educators and businesspeople alike are "discovering their benefits as a means of validating individual performance" (Wiedmer, 1998). While holding much promise, the electronic portfolio has generated scant literature on its application and development. In fact, most of the literature that exists focuses more on the technology of the approach than on its content and implications. To put this in perspective, an extensive search of the literature produced only seven relevant articles
(Oros, 1998; Niguidula, 1997; Piscopo, 1997; Rubin, 1997; Computer Artist, 1995; Milone, Jr., 1995). The use of the digital or cyber portfolio as an ancillary to the résumé has likewise received little attention (cf. Herbert and Schultz, 1996).
Not a single article could be found in the standard business periodicals between 1994 and 1998 on the use of the electronic portfolio in outcomes assessment. This is a surprising finding, given the potential of such a technologically appropriate tool and the benefits of integrating the Internet into course delivery systems.
Access to the Internet has become almost universal, especially on college campuses. Communication among faculty in the same discipline, both within and across colleges, has been greatly facilitated by the Internet and e-mail. The mechanism for a substantive dialogue on the use of electronic portfolios as assessment tools is thus readily available. Surfing the Web pages of numerous colleges and universities reveals that individual faculty are utilizing the Internet to provide students with information, syllabi, resources, and the like; it is fair to say the practice is widespread. Further, there are examples of student work being posted on the Internet for others to view, and students themselves, as members of college clubs and organizations, have featured Web pages as well. All of this demonstrates the potential for utilizing the Internet to move toward a collaborative set of "outcomes" and perhaps templates to demonstrate the "assessment" of those outcomes. In short, the tools are there. What is absent is a framework that would focus the process toward the goal of developing consensus on outcomes assessment.
Consider the possibilities. A national (or global), discipline-specific discussion of what constitutes the core values (and outcomes) of majoring in a given area can be implemented almost immediately. The goal of developing a common set of "outcomes" and "assessment" metrics can be universally communicated via the World Wide Web. Scholars and practitioners can begin a dialogue that transcends traditional boundaries, leading us toward a collaborative and inclusive paradigm. Academicians and practitioners alike can discuss which types of material would demonstrate acquisition of critical conceptual knowledge. These standards could then be applied across college programs and may form the core of emerging professional standards of accountability. Further, the work of the student would be "published" and open to collegial review. This review, in my opinion, is the essence of outcomes assessment. The approach is also significant because it allows a wider audience to view the outcomes of particular college programs across institutional lines.
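The collegial review described here amounts to checking a student's published artifacts against an agreed set of outcomes. The sketch below illustrates one possible form of that check; the student name, artifact titles, and outcome identifiers are all hypothetical, invented for the example and not drawn from the article.

```python
# Illustrative sketch: a minimal electronic-portfolio record linking a
# student's published artifacts to agreed outcome identifiers, plus the
# simple coverage check a reviewer might run. All identifiers are
# hypothetical placeholders.

portfolio = {
    "student": "J. Doe",
    "artifacts": [
        {"title": "Team-conflict case analysis", "outcomes": ["OB-1"]},
        {"title": "Capstone strategy memo", "outcomes": ["OB-2", "CT-1"]},
    ],
}

# Outcomes the (hypothetical) shared standard requires every graduate
# to demonstrate.
required = {"OB-1", "OB-2", "CT-1"}

def covered_outcomes(pf):
    """Collect every outcome demonstrated by at least one artifact."""
    return {o for a in pf["artifacts"] for o in a["outcomes"]}

missing = required - covered_outcomes(portfolio)
print(sorted(missing))  # -> [] : all required outcomes are demonstrated
```

Because both the portfolio and the required set are open for inspection, any reviewer inside or outside the institution could run the same check against the same evidence, which is the kind of common information base the article calls for.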
This opens the process of education to public review and, as such, addresses several of the issues raised by the critics of higher education. To open our processes to a larger audience is to begin to overcome the boundaries of traditional paradigms and to allow a meaningful dialogue based on empirical standards and a common base of information. To put it another way: we say that we are "educating" students; the critics charge that we have not done an effective job. The Electronic Portfolio serves as a vehicle for reviewing and discussing the skills, competencies, values, and "outcomes" assumed to be acquired during the educational process. Employers rely on our educational institutions to prepare students for productive and successful careers as professionals and as citizens. Through the common base of information the Electronic Portfolio provides, these issues can be directly assessed and discussed, and metrics can be developed and refined.
I have argued earlier that academicians practice "peer review" of their teaching and scholarship and rely on that very process for promotion, tenure, and publication decisions. While we may argue about the rigor and consistency of these standards, the practice itself is widely accepted and valued. Taking this as our cue, we are obligated to provide the broader public with visible, measurable documentation of our students' achievement of the common set of outcomes that will emerge from a more focused discussion. The electronic portfolio is an important vehicle for attaining this goal.
What should or should not become part of these portfolios is subject to a wider discussion. The intent here is to begin the dialogue of establishing processes that transcend local and institutional borders, and to involve as wide an audience of interested stakeholders as possible who can participate directly. We have the opportunity, and the responsibility, to become proactive in establishing the dialogue that leads to broader consensus on outcomes. We have the technology and the means to share our views on outcomes and assessment in a way that matters and that allows larger participation in the educational process. High visibility that invites careful scrutiny is something we must strive for. The traditional paradigm of academia and the role of academicians no longer serve us well. We ourselves have the responsibility (and, yes, must assume accountability) for establishing a dialogue that allows an open and honest exchange of ideas leading to the development of meaningful outcomes: those necessary for professional success and for viable, contributing membership in the larger society.
Ultimately, we as academicians have an ethical responsibility to develop a larger and more meaningful paradigm that welcomes broader participation from our stakeholders in higher education. We must be accountable and responsible for what our students gain from the educational process. The Electronic Portfolio offers the opportunity to take the first steps: understanding our process, continually improving upon it, developing and refining definitions of outcomes, and demonstrating a continuous commitment to innovation and refinement of our assessments.
References
Anonymous. (1995). Behind the Art. Computer Artist, 4(6), 54-58.
Carcia, P.A. (1986). The Impact of National Testing on Ethnic Minorities: With Proposed Solutions. Journal of Negro Education, 55 (3), 347-357.
Cramer, S.F. (1994). Assessing Effectiveness in the Collaborative Classroom. New Directions for Teaching and Learning, 59, 69-81.
Davies, D. (1994). Partnerships for Reform. Education Week, XIV (6), 12.
Doyle, D.P. (1991). America 2000. Phi Delta Kappan, 73(3), 184-191.
Edwards, D.E. & Brannen, D.E. (1990). Current Status of Outcomes Assessment at the MBA Level. Journal of Education for Business, February, 206-212.
Goodlad, J.I. (1990). Teachers For Our Nation’s Schools. San Francisco: Jossey-Bass Publishers, Inc.
Gray, P.J. (editor) (1989). Achieving Assessment Goals Using Evaluation Techniques. San Francisco: Jossey-Bass Publishers, Inc.
Halpern, D.F. (editor). (1987). Student Outcomes Assessment: What Institutions Stand to Gain. San Francisco: Jossey-Bass Publishers, Inc.
Hoyler, R. (1998). The Road Not Taken. Change, 30 (5), 40-43.
Hutchings, P. & Marchese, T. (1990). Watching Assessment: Questions, Stories and Prospects. Change, September/October, 12-38.
Jennings, E.T. (1989). Accountability, Program Quality, Outcome Assessment, and Graduate Education for Public Affairs and Administration. Public Administration Review, September/October, 438-445.
Kean, T.H. (1987). Time to Deliver. Change, September/October, 10-11.
Lenn, M.P. (1989). Accreditation and Planning in the Assessment Movement. Educational Record, Spring, 48-49.
Mentkowski, M., Astin, A.W., Ewell, P.T., & Moran, E.T. (1991). Catching Theory Up With Practice: Conceptual Frameworks for Assessment. American Association for Higher Education.
Milone, Jr. M. (1995). Electronic Portfolios: Who’s Doing Them and How? Technology & Learning, 16 (2), 28-33.
NEA Today (1996). The Latest on Student Portfolios. November (15), 3-17.
Nedweck, B., & Neal, J.E. (1993). Performance Indicators and Rational Management Tools. ERIC document AN 360917.
Niguidula, D. (1993). The Digital Portfolio: A Richer Picture of Student Performance. Studies on Exhibitions, 13. Providence, R.I.: Coalition of Essential Schools, Brown University.
Oros, L. (1998). Creating Digital Portfolios. Media & Methods, January/February (34), 3-15.
Phipps, P.A. & Romesburg, K.D. (1988). An Assessment By Alumni of College and University Experience. College and University, 284-295.
Piscopo, M. (1997). Presenting Digital Portfolios. Communication Arts, 9,(5), 38-40.
Ratcliff, J.L. (editor) (1992). Assessment and Curriculum Reform. San Francisco: Jossey-Bass Publishers, Inc.
Rubin, B. (1997). The ‘New’ DBAE. School Arts, 96 (8), 43-46.
Sewall, A.M. (1996). From the Importance of Education in the 80’s to Accountability in the 90’s. Education, 116 (3), 325-333.
Shepard, L.A. (1991). Will National Tests Improve Student Learning? Phi Delta Kappan, 73(3), 232-238.
Spangehl, S.D. (1987). The Push to Assess: Why It’s Feared and How to Respond. Change, 9, January/February, 35-39.
Terenzini, P.T. (1989). Assessment With Open Eyes. Journal of Higher Education, 60, 644-664.
Wiedmer, Terry L. (1998). Digital Portfolios: Capturing and Demonstrating Skills and Levels of Performance. Phi Delta Kappan, 79 (8), 586.
Wise, A.E. & Leibbrand, J. (1996). Profession-Based Accreditation: A Foundation for High-Quality Teaching. Phi Delta Kappan, 78(3), 202-207.