Competitive Event Process FAQs
1. What are the criteria for DECA’s Certificate of Excellence?
ICDC competitors who score a combined 70% or better across the competitive event components in the preliminary competition will earn a digital Certificate of Excellence. This is based on the judge’s raw scores.
For example, a participant in an Individual Series Event who scored a 75 on role-play one, a 69 on role-play two and a 75 on the exam meets the benchmark: the total of 219 points exceeds the minimum of 210 (70% of 300). These totals are based on the participant’s raw scores from the judge – not the normalized scores on the transcripts.
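The benchmark arithmetic above can be sketched in a few lines of Python. This is an illustrative check only, assuming each component is scored out of 100 points, as in the Individual Series example (two role-plays plus an exam); the function name is hypothetical, not a DECA system.

```python
# Hypothetical check for the Certificate of Excellence benchmark.
# Assumes each component is scored out of 100 points (an assumption
# drawn from the Individual Series example above).

def earns_certificate(component_scores, max_per_component=100, threshold=0.70):
    """Return True if the combined raw score meets the 70% benchmark."""
    total = sum(component_scores)
    minimum = threshold * max_per_component * len(component_scores)
    return total >= minimum

# The example from above: 75 + 69 + 75 = 219, and the minimum is 210.
print(earns_certificate([75, 69, 75]))  # True
```

Note that a single low component (the 69 here) does not disqualify a participant, because only the combined total is compared against the benchmark.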
Learn how to download certificates here.
2. How do we ensure judging is fair?
Judging is subjective, and some judges have different standards of evaluation. Because of this, we ensure that for the preliminary competition, each participant/team is only competing against the other participants/teams with the same judge. For the final competition, all the finalists for each competitive event present to the same judge, and only the exam score (if applicable) carries forward – all other judged scores stay in the preliminary competition.
3. How many participants advance to finals from a section?
A general rule of thumb is that the top two participants/teams in each section advance to finals, though other factors, such as ties or penalty points that create a tie, may change this. We prefer not to break ties at the preliminary level of competition and instead advance all tied participants/teams to finals as long as the schedule allows.
4. How is ICDC different from state/provincial and local competition?
Just like the competition level changes from district to association conferences, it changes again when all associations converge at ICDC. As at district and association competition, judging is subjective. Therefore, rankings at the association level won’t necessarily hold true at ICDC, because scores at the association conference reflect the judge at that event. It’s very similar to a track meet – one athlete may do well at one meet while another outperforms him/her at the next, depending on a variety of internal and external conditions, such as the weather.
5. How is a competitive event scored?
This year, all written entries with two rubrics were scored by two different judges during preliminaries to help balance overall subjectiveness. For all events with an exam, the exam score is determined entirely by the student, which also helps reduce subjectiveness. Additionally, penalty points were assessed by DECA staff to provide consistency and fairness.
6. How are “0” scores verified?
Our tabulation system has a reporting feature that flags scores (each component) with a "0" so our team can ensure the score is in fact a "0" (as in a no-show).
7. How do the judges know and understand the process?
This year, we provided the judges with more training than ever, including a training video. In addition, the judges are briefed by each event director. We always ask judges to remain consistent in their role throughout the day and not to make any comments after a presentation is over.
Competitor Transcripts FAQs
After the scores are tabulated, finalists are identified and overall winners are determined using the processes above. The data from the competitive events is then used to create competitive event transcripts. Learn how to download transcripts here.
1. What is a transcript?
Supporting college and career readiness, DECA’s competitive events transcripts rate each member’s performance against the National Curriculum Standards identified as essential knowledge and skills, as well as 21st Century Skills.
DECA transcripts provide evidence of student learning and achievement through participation in DECA’s Competitive Events Program at the International Career Development Conference. DECA transcripts are an exceptional tool for providing feedback to both advisors and members, helping them assess areas for improvement in course instruction and student learning. The transcripts provide ranking and performance measures for each instructional area on the career cluster exam and feedback for each performance indicator and/or evaluation criterion that the judge evaluates.
Because of the structure of DECA’s competitive events program, all data is normalized to provide this feedback and should be viewed as a snapshot of performance rather than in terms of final competition rankings. These sample transcripts provide information on how to read the data provided.
2. What is a normalized score?
Normalization is a statistical process allowing for the comparison of different datasets (i.e., sections) in a way that eliminates the effects of certain influences (i.e., different judges). Scores are reported as if participants were all evaluated by the same judge, when in fact they were not.
The normalized scores are meant to show performance relative to the judge’s scoring among all participants with normalized scores. Because each event has nearly 200 participants/teams and usually 9-10 judges, normalization is necessary; keep in mind that the top score in one section could have been an 85, while the top score in another section could have been a 100.
3. What does the transcript data mean?
The data on the transcript is not the participant’s raw score. It provides a baseline for seeing how the participant performed on a performance indicator in comparison to other participants in the same competitive event. For example, you can identify areas of the exam where the participant excelled, or see how the participant performed, according to the judge, on a specific performance indicator or criterion. Transcripts make it possible to diagnose strengths and weaknesses and can be used to improve future performance in DECA’s Competitive Events Program.
4. What are the evaluation criteria used for the competitive event components?
- A maximum score of “EXCEEDS EXPECTATIONS” in any category means that, in the judge’s opinion, the information is presented effectively and creatively; nothing more could be expected of a participant.
- A “MEETS EXPECTATIONS” rating means that the information is presented well. Though there may be a few minor problems or omissions, they are not significant. Creativity, however, is not shown to any great degree.
- A “BELOW EXPECTATIONS” score means that the information presented does not meet minimum standards of acceptability.
- A “LITTLE/NO VALUE” score means either that some major flaw has been noted that damages the effectiveness of the information (this may be a major omission, a serious misstatement, poor communications or any other major flaw) or that the information presented is of no value (does not address the problem at all).