Self-Confidence of Undergraduate Students in Designing Software Architecture
Lotfi ben Othmane, IEEE Senior Member, and Ameerah-Muhsina Jamil, Iowa State University, Ames, IA, USA
Abstract—Contributions: This paper investigates the relations between undergraduate software architecture students' self-confidence and their course expectations, cognitive levels, preferred learning methods, and critical thinking. Background: These students often lack self-confidence in their ability to use their knowledge to design software architectures. Intended Outcomes: Self-confidence is expected to be related to the students' course expectations, cognitive levels, preferred learning methods, and critical thinking. Application Design: We developed a questionnaire with open-ended questions to assess the self-confidence levels and related factors, which was taken by one hundred ten students over two semesters. The students' answers were coded and analyzed afterward. Findings: We found that self-confidence is weakly associated with the students' course expectations and critical thinking and is independent of their cognitive levels and preferred learning methods. The results suggest that, to improve the self-confidence of the students, instructors should ensure that the students have "correct" course expectations and work on improving the students' critical thinking capabilities.
I. Introduction
Undergraduate students are expected to step directly into software developer positions and succeed. Typical undergraduate students are, however, not prepared for the ambiguity of the industry [1]. A lack of self-confidence makes them reluctant to take opportunities and lead projects, and their capabilities are sometimes below the expectations of employers [2].
Self-confidence, also known as self-efficacy, perceived ability, and perceived competence, is a measure of one's belief in their ability to successfully execute a specific activity [3], [4], [5]. According to Bandura, the outcomes that people expect depend heavily on their self-confidence that they can perform the skill [5]. Self-confidence is considered a critical factor that impacts undergraduate students' abilities in programming [6], [7]. For instance, Heggen and Myers [2] studied students' confidence before joining a program to develop real-world applications. They found that only 25% of the students were optimistic about their abilities to develop software systems before joining the program, and that the students were far more confident in their leadership abilities after finishing it. Hanks measured students' confidence after practicing pair programming and found that the most confident students liked pair programming the most, while the least confident students liked it the least [6].

Software architects gain cumulative architectural knowledge through experience; they make architectural decisions in ambiguous situations and learn by assessing the impacts of these decisions on software [8], [9], [10]. Teaching software architecture is challenging given the nature of software architecture and the characteristics of the learners [11]. For instance, software architecture is a fuzzy concept that is challenging to present as a tangible and useful concept to non-experienced software engineers, while the learners are used to topics where problems and solutions can be precisely defined, which is not the case for architecture.

Software architecture students, like programming students, often have a self-confidence problem. For example, some of our students expressed, in Spring 2017, that they were not able to use their knowledge to design software architectures. The problem of the self-confidence of software architecture students has been addressed, in our opinion, by focusing on practicing with design patterns [12] or adopting the clinical mode [13].

We conducted informal meetings with colleagues to assess the factors that may impact the self-confidence levels of software architecture students. The goal was to identify the basic aspects that we could act on to improve the students' confidence levels. The consultation led to the selection of the following variables: course expectations, cognitive levels, preferred learning methods (e.g., passive, active), and critical thinking.

We developed a questionnaire with open-ended questions to study the relationships between students' self-confidence and their expectations, cognitive levels, preferred learning methods, and critical thinking. We gave the questionnaire to the students who took the course in two subsequent semesters: Fall 2017 and Spring 2018. In total, 110 students out of 138 took the survey. We coded the answers of each student using the descriptive coding method [14] and used the frequency technique, as in text analytics [15], [16], to assess the dependency between the students' self-confidence levels and their expectations, cognitive levels, preferred learning methods, and critical thinking.

The paper is organized as follows. Section II discusses related work. Section III describes the course design. Section IV describes the research method. Section V explores the collected data. Section VI analyzes the relationships between self-confidence and expectations, cognitive levels, preferred learning methods, and critical thinking. Section VII discusses the impacts and limitations of the study, and Section VIII concludes the paper.

II. Related Work
This section reports on existing work exploring ways to teach software architecture.

Valentim et al. [17] performed a study with 17 postgraduate students on student perceptions of applying design thinking to design mobile applications. The students appreciated the process, as they found it useful. However, they found it challenging to apply because they needed to think creatively and generate ideas. Besides, they found the application of the techniques (e.g., workshops and brainstorming) useful but challenging, given the lack of team connection and critical thinking [18].

van Heesch and Avgeriou [19] surveyed 22 undergraduate software engineering students in the Netherlands, aiming to find out the natural reasoning process during architecting. They found that most of the students tried to understand and consider the architectural drivers and emphasized the quality attribute requirements. However, many students did not identify the most challenging requirements nor prioritize them. In addition, most of the students affirmed that they used the requirements to identify design options and preferred well-known solutions rather than unknown alternatives. They also found that while more than half of the students affirmed that they considered the pros and cons of alternative solutions, many did not consciously make trade-offs between requirements.

Schriek et al. [20] propose a card game to help novice designers with design reasoning, that is, using logic and rational thinking to make design decisions. The cards represent the reasoning techniques: problem structuring, option generation, constraint analysis, risk analysis, trade-off analysis, and assumption analysis. The authors evaluated their technique's efficacy using twelve groups of students who took the software architecture course. The study showed that the cards trigger reasoning and lead to more discussion and reconsideration of previous decisions. The groups who used the card game identified more distinct design elements and spent more time reasoning about the design.

Rupakheti and Chenoweth experimented with teaching software architecture to undergraduate students for a decade [12]. They found that teaching the topic is challenging because it contrasts with the students' habits in the other computer science courses. For instance, software architecture requires addressing problems in large and complex software, using multiple complex solutions, and designing from incomplete information. The authors described how they evolved the course from lecture-heavy to a hands-on course that teaches the students how to use architecture patterns to address Quality Attributes (QAs) in lab experiments. The authors found that the use of labs reinforced the students' learning.

Ali and Solis [21] studied the perception of master students on the ease of use, usefulness, and willingness to use the Attribute-Driven Design (ADD) method in the future. They found that the students find the architecture design method useful but not easy to use and are neutral in terms of willingness to use ADD.

Ben Othmane and Lamm [22] studied the factors associated with the mindsets of software architecture students. They found that students' mindset weakly correlates with their cognitive levels and is related to their expectations. They also found that the students who prefer practicing software architecture have more open mindsets than those who prefer quizzes.

We did not find studies on the self-confidence of undergraduate students in designing software architecture; recall that the issue has been investigated for programming students [6], [7]. We initiate the discussion about measuring the students' self-confidence and assessing the factors that may impact it. Recall that this trait is essential for students to take the initiative and lead projects.
III. Course Description
The course Software Architecture Design is an undergraduate-level course for software engineering and computer engineering programs. Before taking the class, the students take a class on developing web applications. The course is given twice a year. Each semester, the class meets twice a week for 14 weeks, in 75-minute sessions.

The goal of the course is to train the students in designing software architecture. The course uses the Attribute-Driven Design (ADD) method [23]. The students acquire the knowledge needed to design software architecture and learn how to apply the ADD method, which is a process-based approach to the design of software architecture [24], [23]. The objectives are to:
1) understand and explain the importance of software architecture,
2) understand the relationships between software quality attributes and software architecture,
3) gain the ability to elicit software architecture drivers,
4) understand the roles of a set of architecture styles, patterns, and tactics in software architecture,
5) apply the attribute-driven design method to design and evaluate software architecture.

The students work in groups on in-class activities. The activities include answering questions that need reflection, working on exercises, and simulating architecture meetings. The case studies provided by [23] were useful for the students to see the use of the techniques.

The students were requested to practice the knowledge that they acquired in the lecture sessions in group and individual assignments. The students work in groups on projects in three group assignments: gathering architectural drivers, designing the architecture of a new version of a given software system, and implementing the architecture they designed. The individual assignments reinforce the experience that the students obtained from the project. The group assignments are related to an Internet of Things (IoT) project, while the individual assignments are related to IT projects. This is expected to give the students experience with the two domains.
TABLE I
Questionnaire.

ID | Factor | Question
1 | Expectation | What was your expectation of the course before taking it?
2 | Cognitive level | Assume you are given a project and asked to design an architecture for it. How would you do the design?
3 | Self-confidence | How much confidence would you have about your design?
4 | Critical thinking | What are the differences between designing the architecture of a Web application and the one of an IoT system?
5 | Preferred learning method | What is/are the method(s) that helped you better learn software architecture?
IV. Research Method
The best approach to assess the relationships between students' self-confidence levels and the expected dependent variables (course expectations, cognitive levels, preferred learning methods, and critical thinking) would be to specify a set of closed questions (e.g., using Likert scales and variable categories) and use inferential statistics techniques. Since we do not know the different categories for each of the dependent variables, we conducted a qualitative study instead. The study uses students' free-text responses to a questionnaire as the data source. We discuss the preparation of the study, the data collection, and the data analysis activities.
Preparation of the study.
We discussed the course with colleagues and identified a set of factors that we expected to be associated with students' self-confidence: (1) course expectations, (2) students' cognitive levels, (3) preferred learning methods, and (4) critical thinking. That is, we used expert opinion rather than a literature review to identify the factors that may impact the students' self-confidence in designing software architecture. The factors were used to develop the set of questions that measure them, listed in Table I.

We developed an anonymous, electronic questionnaire using Google Forms and made it available online for the students in November 2017 (for the Fall 2017 semester) and April 2018 (for the Spring 2018 semester). (The students answered the questionnaire at the end of the semester.) The submissions were anonymous, but the students had to tell the instructor that they participated in the study to get their bonus points.

Data collection.
One hundred ten students answered the questionnaire in Fall 2017 and Spring 2018. We used the thematic analysis method [14] to extract insights from the questionnaire responses. Thematic analysis is a method for identifying, analyzing, and reporting patterns (themes) within data [25]. It allows exploring phenomena through interviews, stories, or observations [26]. (The project was granted an IRB exemption.)
First, we read all the answers to the questions and extracted the thematic code representing each of the answers. A code is a word or short phrase identifying the essence of a portion of language-based or visual data [27]. At the end of this step, we had assigned codes to each of the one hundred ten students' responses and obtained a set of categories for each of the factors of Table I.

The cognitive levels of the students according to the Bloom taxonomy [28] are commonly assessed using either test questions or reflection write-ups [29]. In this study, we used the verbalization in the students' responses to (reflection) Question 2 to identify the cognitive level of each student. The association of the verbs with the different levels is based on the author's domain knowledge (see, for example, https://adp.uni.edu/documents/bloomverbscognitiveaffectivepsychomotor.pdf). For instance, Participant P20 said: "The design would vary depending on what the project requirements and architectural drivers were. Once I decided on an optimal reference architecture, I would go through the iteration design process and make sure that appropriate design decisions were made to address every architectural driver that was identified in the project description." The codes extracted from the statement are: apply the design process, select reference architecture, and evaluate. Since the code "evaluate" is classified at the cognitive level Evaluation, we ranked the student at level Evaluation; that is, the code associated with the highest cognitive level is selected.

Next, we counted the frequencies of the different codes/categories/levels used in the responses to each of the questions of Table I and observed the patterns in these data. We discuss the data that we collected in Section V.
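To illustrate the mechanics of this coding-and-ranking step, here is a minimal Python sketch; the verb-to-level map below is a hypothetical fragment, not the study's full codebook, and the ranks follow the revised Bloom taxonomy order (Remembering = 1 through Creating = 6):

```python
from collections import Counter

# Hypothetical fragment of a code-to-Bloom-level map (revised taxonomy ranks:
# 1 Remembering, 2 Understanding, 3 Applying, 4 Analyzing, 5 Evaluating, 6 Creating).
BLOOM_RANK = {
    "select reference architecture": 2,  # Understanding
    "apply the design process": 3,       # Applying
    "identify trade-offs": 5,            # Evaluating
    "evaluate": 5,                       # Evaluating
    "adjust design process": 6,          # Creating
}

def cognitive_level(codes):
    """A response is ranked at the highest Bloom level among its codes."""
    return max(BLOOM_RANK.get(code, 0) for code in codes)

# Codes extracted from P20's response in the example above.
p20 = ["apply the design process", "select reference architecture", "evaluate"]
print(cognitive_level(p20))  # 5 -> Evaluating

# Code frequencies across responses feed the matrices analyzed in Section VI.
print(Counter(p20).most_common())
```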
Data analysis.
We represented the relationships between the students' self-confidence levels and each factor expected to affect their self-confidence using matrices, one matrix per factor. The columns of a matrix are the self-confidence levels, and the rows are the codes/code categories of the factor being studied. The elements are the frequencies of the students who belong to the given factor category and the given self-confidence level. We use the Chi-square independence test [30] to evaluate the dependencies between self-confidence and the related factors.
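To make this step concrete, the following Python sketch (a minimal illustration, not the study's actual analysis script) runs the Chi-square independence test on a small expectation-by-confidence matrix, using the first three rows of Table V, and also computes Cramér's V, the effect-size measure reported in Section VI:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Rows: three of the course-expectation categories of Table V (illustration only).
# Columns: self-confidence levels 1 (confident) .. 5 (no definite answer).
observed = np.array([
    [10, 12, 7, 7, 2],   # No expectation
    [11, 12, 9, 2, 2],   # Design of architecture
    [ 6,  1, 0, 1, 1],   # Curious about the topic
])

chi2, p_value, dof, expected = chi2_contingency(observed)

# Cramér's V rescales chi-square by the sample size and table dimensions.
n = observed.sum()
cramers_v = np.sqrt(chi2 / (n * (min(observed.shape) - 1)))

print(f"chi2={chi2:.2f}, p={p_value:.3f}, dof={dof}, V={cramers_v:.2f}")
```

Note that the statistics reported in Section VI (e.g., χ² = 79.6 for expectations) are computed over the full tables, not over this excerpt.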
V. Data Collection
This section summarizes the responses of the students to the questionnaire and discusses the results.
A. Self-confidence
This subsection discusses the results of the analysis of the responses to the question: How much confidence would you have about your design?
We classified the extracted codes into five categories: high, moderate, fair, and no self-confidence, in addition to a no-definite-answer category. Table II shows the codes that we used for each level, and Figure 1 shows the frequency of these levels.

TABLE II
Codes used to express self-confidence of the students in their architecture designs.

ID | Confidence level | Codes
1 | Confident | very confident, confident, pretty confident
2 | Moderate | somewhat confident, moderate, decent, relative, quite confident
3 | Fair | fair confidence, not very/extremely/overly confident
4 | No confidence | not confident, not great
5 | No definite answer | no definite answer, no answer, not sure

Fig. 1. Frequency of self-confidence levels.

The number of students who have a high self-confidence level is 34 (31%). These students seem to be comfortable applying design processes, such as ADD. For instance, student P24 expressed that by saying: "I feel I would be very confident in my design because I think the design process does a good job of ensuring the architecture considers and satisfies all the drivers. So as long as I am successful in compiling a thorough list of drivers, I think the design will turn out well." Out of the remaining students, eight (about 7%) did not provide definitive answers. Students who elaborated their answers expressed the need for references to ensure the efficacy of their designs.
B. Students' expectations about the course
This subsection discusses the analysis results of the answers to the question: What was your expectation of the course before taking it?

Figure 2 shows the frequency of the students' expectations about the course. In general, most of the students expected the course to be about the design of architecture (32.7%), architecture styles (20.9%), and the design process (14.5%); note that some students specified more than one course expectation category. We observe that eight students related the course to other courses and two students related it to experiences they had in their internships. We also observe that the number of students who did not have a clear expectation about the course is 38 (34.5%). The reason for this high percentage is possibly that the course is required for their programs.

Fig. 2. Frequency of expectations.

TABLE III
Cognitive levels of the students.

ID | Level | Codes
1 | Creating | adjust design process
2 | Evaluating | identify trade-offs, identify risk, architecture evaluation
3 | Analyzing | analysis
4 | Applying | identify architecture drivers, get requirements, meet stakeholders, create design, apply the design process, do as in assignments, modify reference architecture
5 | Understanding | select reference architecture, select architecture style, select architecture type, make diagrams
6 | Remembering | —
7 | Irrelevant | —
C. Cognitive levels
This subsection reports the results of the analysis of the replies to the question: Assume you are given a project and asked to design an architecture for it. How would you do the design?

We coded the responses of the students and classified the extracted codes based on the new Bloom taxonomy cognition levels [28]. Table III shows the classification of the codes into Bloom's cognition categories, and Figure 3 provides the frequency of the cognitive levels. We observe that most of the students have "applying" and "understanding" cognitive levels. The number of students who provided irrelevant answers is sixteen (14.5%). Many of these students specified that they need more details to decide how to proceed with the design or provided non-useful answers such as "I would probably try and layout the entire system's architecture in one go because the process of iterations confused me." (P11). In addition, we note that eight students had the "evaluating" cognitive level, one student had the "creating" level, and one student had the "remembering" level.

Fig. 3. Frequency of the cognitive levels.

Fig. 4. Frequency of learning methods.
D. Preferred learning methods
This subsection discusses the results of the analysis of the answers to the question: What is/are the method(s) that helped you better learn software architecture?

Figure 4 shows the frequency of preferred learning methods; a student can specify a set of methods. We observe that the number of students who prefer practice is 27 (24.5%) and the number of students who prefer reading is 18 (16.3%). We cannot distinguish students who prefer active learning methods from students who prefer passive learning methods because each student can specify both active and passive learning methods, e.g., reading and practice. The figure shows, however, that active learning methods are specified by the students more frequently than passive learning methods.
E. Critical thinking
We assess students' critical thinking by evaluating their abilities to identify the differences between the architectures of Web applications and IoT-based software. This subsection reports the results of the analysis of the replies to the question: What are the differences between designing the architecture of a Web application and the one of an IoT system?

TABLE IV
Codes associated with the critical thinking aspects.

ID | Aspect | Codes
1 | Architecture drivers | reliability, interoperability, scalability, architecture drivers, availability, performance, and security requirements
2 | Patterns of the structures of the solutions | communication pattern, components structure (e.g., modularity), control of physical devices, objects vs. logic computation, interacting actors, access to arch. components, flexibility to add components, integration of complex software
3 | Architectural knowledge | reference architecture, architecture styles, architecture patterns
4 | Simplicity | simplicity and complexity
5 | Technology stack | technology stack, security protocols, use of hardware, complexity of software
6 | Configuration management | configuration management
7 | No definite answer | —

Fig. 5. Frequency of critical thinking aspects.

Table IV provides the codes that we derived from the responses, which are classified into six categories: architecture drivers, patterns of the solutions' structures, architectural knowledge, simplicity, technology stack, and configuration management, plus a no-definite-answer category. Note that some students identified differences in more than one category; e.g., a student could discuss performance, which is an architecture driver, and distribution of the system's components, which is a pattern of the solution's structure. Figure 5 provides the frequency of the critical thinking aspects. We observe that the number of students who expressed that the two types of systems use different architecture structure patterns is the largest, 48 (43.7%), and the number of students who discussed the differences in the technology stack is 16 (14.4%). This is a good result, as the students are expected to have limited experience with the technology stack but are expected to reason about the architecture drivers, patterns, and tactics. We also observe that the number of students who did not provide definitive answers is 26 (23.6%). Some of these 26 students reported that "they do not know", did not answer the question, or provided non-useful answers such as "I thought this was a survey, not a test."
TABLE V
Students' course expectations vs self-confidence levels. (Columns 1–5 are the self-confidence levels defined in Table II.)

Expectation | 1 | 2 | 3 | 4 | 5
No expectation | 10 | 12 | 7 | 7 | 2
Design of architecture | 11 | 12 | 9 | 2 | 2
Curious about the topic | 6 | 1 | 0 | 1 | 1
Relation to another course | 1 | 3 | 4 | 0 | 0
Heavy coding | 2 | 4 | 5 | 1 | 1
Types of architecture | 0 | 3 | 4 | 0 | 0
Architecture styles | 6 | 10 | 7 | 0 | 0
Relate to internship | 1 | 1 | 0 | 0 | 0
Design pattern | 2 | 2 | 2 | 1 | 1
Theoretical | 0 | 1 | 0 | 0 | 2
Group project class | 0 | 0 | 0 | 0 | 1
Software development | 1 | 0 | 0 | 0 | 0
Architecture evaluation | 1 | 3 | 0 | 0 | 0
Design process | 7 | 4 | 2 | 1 | 2
Design practices | 1 | 0 | 0 | 0 | 0
Other | 2 | 3 | 1 | 0 | 0
VI. Analysis of the Relationships between Self-Confidence and Course Expectations, Cognitive Levels, Preferred Learning Methods, and Critical Thinking
In this section, we analyze the relationships between the students' self-confidence levels and their expectations, critical thinking, cognitive levels, and preferred learning methods. We use in this analysis the Chi-square independence test [30] and the item frequencies.
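As a brief reminder, the two statistics used below follow their standard definitions (nothing specific to this study): for an r × c contingency table with observed frequencies O_ij, expected frequencies E_ij, and N coded answers in total,

```latex
\chi^2 = \sum_{i=1}^{r}\sum_{j=1}^{c}\frac{(O_{ij}-E_{ij})^{2}}{E_{ij}},
\qquad
E_{ij} = \frac{\bigl(\sum_{j} O_{ij}\bigr)\bigl(\sum_{i} O_{ij}\bigr)}{N},
\qquad
V = \sqrt{\frac{\chi^{2}}{N\,\bigl(\min(r,c)-1\bigr)}}
```

Cramér's V rescales χ² to the range [0, 1], which is why values such as 0.15–0.17 below are read as weak associations.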
A. Students’ course expectations.
The Chi-square test confirms a weak association and dependency between the students' self-confidence levels and their course expectations, with χ² of 79.6, a p-value of 0.04, and Cramér's V of 0.17. Table V provides the frequencies of the course expectations vs. self-confidence levels of the students. We observe that the students who have no expectations are either confident (10 students) or have moderate self-confidence (12 students). The results suggest that the students who have no expectation, expect the course to be about the design of architecture, or were curious about the topic have better self-confidence levels. This suggests that the course instructors should ensure that the students have correct expectations of the course.

B. Critical thinking.
The Chi-square test confirms a weak association and dependency between the students' self-confidence levels and their critical thinking, with χ² of 39.88, a p-value of 0.022, and Cramér's V of 0.15. Table VI provides the frequencies of the students' critical thinking aspects vs. their self-confidence levels. We observe that the students who have high self-confidence discuss more the differences between Web-based and IoT-based applications in terms of architecture drivers and patterns of the solutions' structures and, with lesser frequency, the architectural knowledge and technology stack, while the students who have moderate self-confidence discuss the differences in the patterns of the structures of the solutions and the architectural knowledge and, with lesser frequency, the differences in the architecture drivers between the two software types. Thus, we observe that students who have high, moderate, and fair self-confidence are mainly able to identify the differences in the architecture drivers, patterns of the solutions' structures, and architectural knowledge between the two architecture types, and the students who did not express their critical thinking capability have mostly fair self-confidence.

TABLE VI
Students' self-confidence levels vs the architecture aspects that they mentioned when differentiating the architectures of Web applications and IoT-based software.

Identified differences | 1 | 2 | 3 | 4 | 5
Architecture drivers | 18 | 10 | 9 | 3 | 1
Patterns of the structures of the solutions | 16 | 21 | 4 | 3 | 4
Architectural knowledge | 6 | 17 | 12 | 1 | 1
Simplicity | 0 | 1 | 3 | 1 | 1
Technology stack | 5 | 3 | 3 | 3 | 2
Configuration management | 2 | 0 | 0 | 0 | 0
No definite answer | 4 | 7 | 10 | 2 | 3

TABLE VII
Relationship between cognitive levels and self-confidence levels.

Cognition category | 1 | 2 | 3 | 4 | 5
Creating | 0 | 2 | 0 | 0 | 0
Evaluating | 1 | 3 | 5 | 3 | 2
Analyzing | 3 | 0 | 0 | 0 | 0
Applying | 23 | 29 | 22 | 8 | 6
Understanding | 18 | 18 | 8 | 1 | 1
Remembering | 0 | 0 | 1 | 0 | 0
Irrelevant | 6 | 7 | 5 | 1 | 3
C. Students' cognitive levels.
The Chi-square test confirms the independence between the students' cognitive levels and their self-confidence levels, with χ² of 31.73 and a p-value of 0.13; the significance level is low. Table VII provides the frequencies of the students' cognitive levels vs. their self-confidence levels. We observe that the students who have the applying cognitive level do not necessarily have high self-confidence levels, and the students who have the understanding cognitive level do not necessarily have low self-confidence levels. However, we observe that the students who have the creating and evaluating cognitive levels have mostly moderate, fair, or no self-confidence. The paradox that high performers exhibit under-confidence is documented in other domains, such as accounting [31]. A possible reason is that high performers know the limits of their abilities.

D. Students' preferred learning methods.
The Chi-square test confirms the independence between the students' self-confidence levels and their preferred learning methods, with χ² of 63.69 and a p-value of 0.34. Table VIII provides the frequencies of the students' preferred learning methods vs. their self-confidence levels. We observe that the students who prefer to learn from case studies have mostly a moderate self-confidence level (11 out of 21); the students who prefer practice have mostly high, moderate, or fair self-confidence levels (24 out of 27 students); the students who prefer reading have mixed self-confidence levels; and the students who did not provide a definitive answer have mostly good or moderate self-confidence. While there is no statistical evidence, the detailed analysis suggests that providing the students with practice opportunities helps them gain better self-confidence levels.

TABLE VIII
Students' self-confidence levels vs preferred learning methods.

Learning method | 1 | 2 | 3 | 4 | 5
Group assignments | 4 | 5 | 4 | 3 | 0
Individual assignments | 5 | 4 | 4 | 2 | 1
Case studies | 4 | 11 | 1 | 1 | 4
Reading | 5 | 8 | 4 | 0 | 1
In-class group activities | 5 | 4 | 4 | 1 | 0
No definitive answer | 9 | 7 | 4 | 1 | 0
Quizzes | 2 | 1 | 2 | 0 | 1
Drawing diagrams | 6 | 5 | 5 | 0 | 2
Evaluate work of peers | 1 | 1 | 3 | 0 | 0
Learning on own | 1 | 1 | 2 | 1 | 1
Assignments | 0 | 1 | 0 | 1 | 0
Other courses | 0 | 0 | 0 | 1 | 0
Practice | 8 | 9 | 7 | 2 | 1
In-class posters | 0 | 1 | 1 | 0 | 0
Lectures | 0 | 1 | 0 | 0 | 1
Live demo | 1 | 0 | 0 | 0 | 0

VII. Impacts and Limitations of the Study
This paper explores a set of factors that we believe are related to undergraduate students' confidence levels, i.e., confidence in their abilities to design software architecture after taking a course on software architecture. The study found that the students' self-confidence is weakly associated with their expectations of the course and their critical thinking in differentiating between the architectures of Web-based and IoT-based applications, and does not depend on their cognitive levels and preferred learning methods. Figure 6 depicts these relationships; the color indicates the associated factors.

We reiterate that the students who have high cognitive levels did not have high self-confidence levels, and self-confidence is not associated with the cognitive levels. We also did not see significant patterns in the answers of the students who do not have confidence in their ability to design software architecture. We found that these students have varying preferred learning methods (including practice), varying expectations of the class, and different cognitive levels. The results suggest that, to improve the self-confidence of the students, the instructor should ensure that the students have "correct" course expectations and work on improving the students' critical thinking capabilities. (We note that we cannot correlate the data with the students' assessment scores in the class because we did not request that in the IRB before starting the study.)

Fig. 6. Self-confidence and expected related factors. The yellow boxes indicate the variables associated with self-confidence.

TABLE IX
Relationship between the critical thinking and cognitive levels. (Columns 1–7 are the cognitive-level IDs of Table III.)

Differences topics | 1 | 2 | 3 | 4 | 5 | 6 | 7
Architecture drivers | 1 | 2 | 1 | 23 | 11 | 0 | 3
Patterns of the structures of the solutions | 0 | 5 | 2 | 26 | 12 | 0 | 3
Architectural knowledge | 0 | 1 | 0 | 22 | 9 | 1 | 4
Simplicity | 0 | 2 | 0 | 0 | 2 | 0 | 2
Technology stack | 1 | 3 | 0 | 10 | 2 | 0 | 0
Configuration management | 0 | 0 | 0 | 0 | 2 | 0 | 0
No definite answer | 0 | 1 | 0 | 7 | 8 | 0 | 10

The main limitations of the study follow. First, we did not use a repeatable process to identify the factors that affect the students' self-confidence. The factors used in the study were identified in brainstorming sessions with colleagues; there could be other factors that impact the students' self-confidence that would be worth studying. Second, the students provided their responses in text, and the authors coded the responses. We acknowledge that the coders' perspective impacts the data extraction, which applies to qualitative research in general. We, however, revisited the data extraction several times to reduce this limitation. We also cross-checked the students' answers often. For instance, we used Table IX, which describes the relationships between the students' cognitive levels and their critical thinking. The Chi-square test of dependency confirms the dependency between the two factors, with χ² of 57.71 and a p-value of 0.01. The table shows that some of the students (12 out of 22) are not associated with a specific cognitive level but provided differences between the architectures of IoT-based and Web-based applications, which led us to double-check our coding.

The study shows that self-confidence is associated with the critical thinking of the students. This suggests that instructors can change their students' self-confidence by giving them knowledge about alternative solutions for solving given architecture problems, so they understand that there are conditions and implications of using architecture knowledge to solve architecture problems before asking them to apply architecture design methods [32]. The results suggest that the instructors should ensure that the students' expectations are aligned with the course goals and try to use case studies that show contrasts, e.g., performance needs for Web applications and IoT-based software.

VIII. Conclusions
In this paper, we analyzed the relationships between students' self-confidence levels and their expectations, preferred learning methods, cognitive levels, and critical thinking. The study found that the students' self-confidence levels depend on their expectations of the course and their critical thinking capability, but it did not find dependency relationships between self-confidence and the students' cognitive levels or preferred learning methods. To improve the self-confidence of the students, the instructor should ensure that the students have "correct" course expectations and work on improving the students' critical thinking capabilities.
Acknowledgment
The authors thank Yesdaulet Izenov for helping with the survey.
References

[1] R. T. Mowday, "Leader characteristics, self-confidence, and methods of upward influence in organizational decision situations," The Academy of Management Journal, vol. 22, pp. 709–725, Dec. 1979.
[2] S. Heggen and C. Myers, "Hiring millennial students as software engineers: A study in developing self-confidence and marketable skills," in Proceedings of the 2nd International Workshop on Software Engineering Education for Millennials, pp. 32–39, 2018.
[3] C. A. Shoemaker, "Student confidence as a measure of learning in an undergraduate principles of horticultural science course," HortTechnology, vol. 20, pp. 683–688, 2010.
[4] A. Bandura, Social Foundations of Thought and Action: A Social Cognitive Theory. Prentice-Hall Series in Social Learning Theory, Prentice-Hall, 1986.
[5] D. Druckman and R. A. Bjork, Learning, Remembering, Believing: Enhancing Human Performance, ch. Self-Confidence and Performance, pp. 173–206. Washington, DC: The National Academies Press, 1994.
[6] B. Hanks, "Student attitudes toward pair programming," pp. 113–117, 2006.
[7] V. Ramalingam, D. LaBelle, and S. Wiedenbeck, "Self-efficacy and mental models in learning to program," in Proceedings of the 9th Annual SIGCSE Conference on Innovation and Technology in Computer Science Education, pp. 171–175, 2004.
[8] U. van Heesch and P. Avgeriou, "Mature architecting - a survey about the reasoning process of professional architects," pp. 260–269, June 2011.
[9] A. Tang, M. Razavian, B. Paech, and T. Hesse, "Human aspects in software architecture decision making," Gothenburg, Sweden, April 2017.
[10] V. Clerc, P. Lago, and H. van Vliet, "The architect's mindset," in Software Architectures, Components, and Applications (S. Overhage, C. A. Szyperski, R. Reussner, and J. A. Stafford, eds.), Berlin, Heidelberg: Springer, pp. 231–249, 2007.
[11] M. Galster and S. Angelov, "What makes teaching software architecture difficult?," in Proceedings of the 38th International Conference on Software Engineering Companion, ICSE '16, pp. 356–359, 2016.
[12] C. R. Rupakheti and S. Chenoweth, "Teaching software architecture to undergraduate students: An experience report," in Proceedings of the 37th International Conference on Software Engineering - Volume 2, ICSE '15, pp. 445–454, 2015.
[13] M. McCracken, I. Hsi, H. Richter, R. Waters, and L. Burkhart, "A proposed curriculum for an undergraduate software engineering degree," in Thirteenth Conference on Software Engineering Education and Training, pp. 246–257, March 2000.
[14] J. Saldaña, The Coding Manual for Qualitative Researchers. Sage, 2015.
[15] M. R. Mehl, Handbook of Multimethod Measurement in Psychology, ch. Quantitative Text Analysis, pp. 141–156. American Psychological Association, 2006.
[16] M. Gentzkow, B. Kelly, and M. Taddy, "Text as data," Journal of Economic Literature, vol. 57, pp. 535–574, Sep. 2019.
[17] N. M. C. Valentim, W. Silva, and T. Conte, "The students' perspectives on applying design thinking for the design of mobile applications," pp. 77–86, May 2017.
[18] J. Heywood, Empowering Professional Teaching in Engineering: Sustaining the Scholarship of Teaching. Synthesis Lectures on Engineering, Morgan and Claypool, March 2018.
[19] U. van Heesch and P. Avgeriou, "Naive architecting - understanding the reasoning process of students," in Software Architecture: 4th European Conference, ECSA 2010, Copenhagen, Denmark, August 23–26, 2010, Proceedings (M. A. Babar and I. Gorton, eds.), Berlin, Heidelberg: Springer, pp. 24–37, 2010.
[20] C. Schriek, J. M. E. van der Werf, A. Tang, and F. Bex, "Software architecture design reasoning: A card game to help novice designers," in Proceedings of the 10th European Conference on Software Architecture (ECSA), Copenhagen, Denmark, pp. 22–38, Dec. 2016.
[21] N. Ali and C. Solis, Exploring How the Attribute Driven Design Method Is Perceived, pp. 23–40. Morgan Kaufmann, 2014.
[22] L. Ben Othmane and M. Lamm, "Mindset for software architecture students," vol. 2, pp. 306–311, Jul. 2019.
[23] H. Cervantes and R. Kazman, Designing Software Architectures: A Practical Approach. Addison-Wesley Professional, 1st ed., 2016.
[24] C. Hofmeister, P. Kruchten, R. L. Nord, H. Obbink, A. Ran, and P. America, "Generalizing a model of software architecture design from five industrial approaches," pp. 77–88, 2005.
[25] V. Braun and V. Clarke, "Using thematic analysis in psychology," Qualitative Research in Psychology, vol. 3, no. 2, pp. 77–101, 2006.
[26] L. M. Connelly, "What is phenomenology?," MedSurg Nursing, vol. 19, no. 2, pp. 127–129, 2010.
[27] N. Bricki and J. Green, "A guide to using qualitative research methodology," Feb. 2007.
[28] L. W. Anderson, D. R. Krathwohl, P. W. Airasian, K. A. Cruikshank, R. E. Mayer, P. R. Pintrich, J. Raths, and M. C. Wittrock, A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom's Taxonomy of Educational Objectives. New York, NY: Pearson, 2000.
[29] M. Harvey, C. Baumann, and V. Fredericks, "A taxonomy of emotion and cognition for student reflection: introducing emo-cog," Higher Education Research & Development, vol. 38, no. 6, pp. 1138–1153, 2019.
[30] W. G. Cochran, "The χ² test of goodness of fit," The Annals of Mathematical Statistics, vol. 23, no. 3, pp. 315–345, 1952.
[31] S. P. Ravenscroft, T. R. Waymire, and T. D. West, "Accounting students' metacognition: The association of performance, calibration error, and mindset," Issues in Accounting Education, vol. 27, no. 3, pp. 707–732, 2012.
[32] C. Hofmeister, P. Kruchten, R. L. Nord, H. Obbink, A. Ran, and P. America, "A general model of software architecture design derived from five industrial approaches," Journal of Systems and Software, vol. 80, no. 1, pp. 106–126, 2007.