AI-Assisted Coding in Python and Collaborative Project Development with Git

KITE Award 2026 Nominee

Innovedum

AI in Teaching & Learning

Many students need to create computer programs but lack formal training. This course turns AI into a responsible coding partner: students learn Python through short practice loops, build tests and documentation, and collaborate via Git to deliver open, reusable, and reproducible software. Through hands-on practice, open and honest exchange between students and instructors, and a flexible learning environment, the course empowered students to use genAI to solve complex, real-world problems. By semester’s end, 10 diverse teams had created transparent, domain-relevant projects, and students left with practical habits they can transfer to their theses and other research.

Implementation of the Course

Teaching mode: The course offered hybrid access with strong encouragement to attend in person. Each 90-minute session combined short inputs with immediate practice. We ran two structured feedback rounds and redesigned the format twice during the semester. The final format paired mini-task loops in the first half (3–4 slides followed by a hands-on task with TA coaching) with a longer applied block in the second half. Students reported appreciating this agility and responsiveness.

Active teaching vs passive support: Sessions centered on coding together. Short explanations were followed by supervised implementation while the lecturer and TAs circulated to unblock students and model debugging, testing, and documentation habits. The final format devoted most of the contact time to active work. In-class feedback came from rapid debriefs after each mini-task, live code walkthroughs, and dialogue between instructors and peers. During assignments, students were required to self-assess their learning and challenges and to peer-review classmates’ work against a clear rubric.

Engagement and collaboration: Student engagement and participation were fostered by questions throughout the lecture, supported by EduApp polls for those less comfortable responding “on the spot.” In addition, we used frequent pair work and small-group tasks to promote collaboration, and on several occasions moved to the PBLabs for a more project-oriented, collaborative space for the students.

Infrastructure: Moodle was the central hub for announcements, materials, and Q&A. Dedicated chats/forums supported student-to-student help, especially during the project phase; the instructor and TAs coordinated the chats to ensure consistent guidance across sessions. The project slides were hosted on ETH GitLab.

The synchronous and asynchronous elements of teaching were:
• Synchronous: weekly hands-on sessions (mini-tasks plus a longer applied exercise), TA coaching, and quick polls.
• Asynchronous: take-home assignments with reflection and peer review, team project work on Git, ongoing forum discussion, and office hours with the lecturer and TAs.

Student support consisted of continuous TA presence during class, prompt forum replies, and transparent examples and solutions shared after debriefs to consolidate learning. The assessment strategy comprised (1) take-home assignments (30%), where students had to attempt a solution, perform a self-assessment, and complete a rubric-based peer review of another student’s assignment to earn the grade; (2) an AI-assisted exam (30%), where supervised use of LLMs was allowed to emphasize problem understanding, verification, and code quality; and (3) a final team project (40%), where student teams created a documented, modular repository with tests and a Git history evidencing collaboration.

Challenges and lessons learned: Student input prompted (i) shorter, more frequent exercises; (ii) earlier and steadier Git practice; (iii) clearer project scope and early publication of the rubric; (iv) more prompt-engineering practice to better align with the AI-enabled exam; and (v) sharing exercise solutions after class. These changes were implemented during the semester and positively noted by students.

He asked us for feedback and integrated it the week after.
course participant

Motivation, Project Mission and Vision

Research across many disciplines, including the Earth and planetary sciences, is data-rich, yet most non-computer-science students have never been taught how to turn ideas into robust software. Generative AI lowers the entry barrier but also creates uncertainty about responsible use. On top of this, collaborating with fellow students on software is often overlooked. The mission of this course was to equip students with tools and methodologies to safely and effectively leverage AI as a coding partner so that they can produce reusable, reproducible, and readable scientific software. The course focuses on responsible and ethical use of genAI; transparent and open use; collaboration and peer learning; and a dynamic, evidence-based teaching environment with hands-on learning in the foreground. The goals of the course during the semester were to:
1. Build core Python skills for scientific work while emphasizing the 3Rs: readability, reproducibility, and reusability of code.
2. Develop judgment about when and how to use genAI for software (prompting, verification, testing).
3. Teach Git workflows for transparent teamwork and enable students to ship a working, documented team project solving a real problem in their domain.
Beyond the course, the long-term vision is to create graduates who can responsibly harness AI to build open, trustworthy research software, raising the quality and velocity of scientific problem-solving across the Earth sciences and beyond.

Innovative Elements

• AI-assisted coding as a core skill: students learn to prompt, critique, and improve AI-generated code through testing strategies, thorough documentation, modular design, and proactive problem solving.
• A 3R rubric (readability, reproducibility, reusability) guides all work and is also used for peer review to build evaluative judgment. Students routinely provided self-assessments and peer reviews for all assignments, reinforcing the 3R practices.
• Live “mini-task” loops: dynamic teaching where content is presented in 3–4 slides, followed by an immediate coding task; students are coached by TAs, and each task ends with a rapid debrief.
• Team science with Git: a team project that leverages Git for transparent collaboration and traceability.
• Responsive class design: we redesigned the format twice mid-course based on student input.
• A first-of-its-kind AI-assisted exam that explicitly allowed the use of genAI, focusing on problem understanding, verification, and code quality.
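To make the 3R rubric concrete, here is a minimal, hypothetical sketch of the kind of code the rubric rewards: a small, pure function with descriptive names and a docstring (readability), no hidden state (reusability), and a deterministic test alongside it (reproducibility). The function and its name are our own illustration, not material from the course.

```python
def moving_average(values, window):
    """Return the simple moving average of `values` over `window` points.

    Raises ValueError if `window` is smaller than 1 or larger than
    the number of values, so misuse fails loudly instead of silently.
    """
    if window < 1 or window > len(values):
        raise ValueError("window must be between 1 and len(values)")
    return [
        sum(values[i : i + window]) / window
        for i in range(len(values) - window + 1)
    ]


# A deterministic test in the style students practised (runnable with pytest):
def test_moving_average():
    assert moving_average([1, 2, 3, 4], 2) == [1.5, 2.5, 3.5]


test_moving_average()
```

Pairing every function with such a test is also how students were encouraged to verify AI-generated code rather than trust it on sight.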

It helped me a lot to improve my programming skills.
course participant

Effects on Student Learning

In the formal end-of-course questionnaire, students rated engagement and clarity at 4.7/5 and interaction at 4.9/5. They also agreed that the exercises supported their understanding of course content (4.5/5), and their interest at the start was already high (4.5/5). Across the 38 students, the overall mean grade was 5.15, with a range from 4.25 to 5.75 and no fails. The final team projects covered a wide range of applications and showcased the use of Git to deliver documented, tested code, building transferable skills for other work (e.g., theses or future projects). Evidence of the course’s success comes from strong student evaluations, positive open comments about relevance and teamwork, and a grade distribution centered on good to very good, with zero failures. We also ran self-assessments on assignments to surface misconceptions and adapt the teaching approach.

Great and very worthwhile course… content is excellent.
course participant

ETH Competence Framework

It gave me a bunch of interesting insights into AI, teamwork and Git that I didn't have before.
course participant
