Important Dates
Event | Date | Timezone |
---|---|---|
Submission | | AoE |
Notification | January 6, 2023 | AoE |
Camera-Ready | January 13, 2023 | AoE |
Motivation
Software is developed by humans and for humans, but often without properly considering humans. A large part of empirical studies in software engineering is based on human-centric experiments: the empirical evaluation of tools, processes, or models that involve human participants (e.g., the usability evaluation of development environments and other tools). These studies are designed to the best of the researchers’ knowledge; yet because their primary field is usually computer science rather than the humanities, such experiments are often suboptimal from a methodological perspective and are conducted ad hoc, with students or the researchers themselves as participants. In contrast, techniques for human studies are well established in psychology, the social sciences, and other fields of the humanities.
The 2nd Workshop on Advances in Human-Centric Experiments in Software Engineering (HUMAN 2023) aims to bring together researchers interested in how the discipline of software engineering can benefit from human participation through human-centric experiments. Therefore, the workshop strives to actively involve researchers from fields other than computer science (e.g., psychology or social sciences) who have in-depth methodological knowledge of applicable techniques for human-centered experiments.
Topics of Interest
The contributions should be of direct interest to software engineering, especially for the areas of analysis, development, and (re)engineering. In particular:
- Empirical evaluations of software tools, methods, and (re)engineering.
- Human aspects in software development, analysis, and evolution, including collaborative software (re)engineering practices.
- Research methods for conducting experiments in software development that focus on humans.
- Objective measurement and assessment for human aspects in software development practices.
- Qualitative analysis in software analysis, development, and (re)engineering.
- Social aspects of software engineering practice, including gender, equity, and diversity.
Additional topics involving both software engineering and human aspects are also welcome.
Categories
Submitted papers should fit into one of the following categories:
- Empirical Papers: Empirical application or evaluation of theoretical or practical tools, methodologies, or techniques from the humanities to address software engineering-related problems and research issues. We encourage researchers to submit parts of their already presented work on the successful application of humanities-related methodologies. Negative results are also welcome.
- Methodological Papers: Adaptation of theoretical or practical frameworks, mindsets, or methodologies from the humanities to software engineering-related problems and research issues. Submissions may contain a methodological description of a possible application of humanities-related methods without an empirical evaluation yet.
Submission
All submissions need to:
- … be in English
- … not exceed 8 pages including references, figures, and appendices
- … come in PDF format
- … be uploaded electronically via EasyChair
- … conform to the IEEE Conference Proceedings Formatting Guidelines
- … comply with the IEEE Policy on Authorship
As we follow the full double-blind review process determined by the main conference, submitted papers also need to adhere to the following rules:
- Omit author names and affiliations.
- Formulate references to the authors’ own work in the third person (e.g., “We build on the work of …” instead of “We build on our previous work …”). It may happen that the current submission clearly links to one of your previous papers, so that despite the third person form, the reviewers will clearly link the authorship of such previous work to the current submission. In this case, you may decide to anonymize the reference itself at the time of submission. For example, “based on previous results [10] …” where the reference is given as “[10] Anonymous authors. Omitted by double blind check”. However, make sure that the paper is self-contained and that its content can be reviewed and understood without access to the earlier work.
- Leave out acknowledgements of people, grants, organizations, or anything else that would give away your identity. You can, of course, add these acknowledgements in the camera-ready version of your paper.
- Modify naming conventions or project names if they might unblind individual authors and their institutions but indicate that the change is due to the double-blind review. For example, if your project is called “GoogleDeveloperHelper”, which makes it clear the work was done at Google, for the submission version, use the name “DeveloperHelper” or “BigCompanyDeveloperHelper” instead.
- Avoid mentioning the institution or organization where the work was done. For example, if the evaluation involves a user study of students in the CS 101 course you teach, you might say, “The study participants consist of 200 students in an introductory CS course.” You can, of course, add the institutional information in the camera-ready version of your paper.
- Avoid linking directly to code repositories or tool deployments that may reveal your identity. You can either indicate that you will only provide a link to the code or deployment in the camera-ready version or include links to anonymized repositories. When creating such repositories, it may be good practice to ask someone on your team to test the anonymization of the repository and its content.
Program committee members are asked to consider the principle of double-blinding when reviewing papers and therefore not to require full availability of artifacts at the time of submission. Submissions that do not meet the above formatting, submission, or double-blind requirements will be rejected without review.
Publication
All accepted workshop papers will be published together with the SANER 2023 proceedings.