Student groups and faculty caution against cracking down on the use of artificial intelligence (AI) in the university as the UP System drafts its AI use policy.
For Migo Pagdanganan, vice chair of Agham Youth Engineering, the university must be careful not to demonize AI tools, especially when there is no accurate way to detect whether content submitted by a student was generated by AI. Instead, any AI policy should focus on AI's potential as a productive tool for education.
“At the end of the day, there is no AI policy that can completely solve the issue of cheating and plagiarism as long as the education system remains traditionally outcome-based and as long as the education system does not pay attention to the particular contexts and learning processes of its students,” he said in an interview with the Collegian.
Earlier this year, the UP Diliman (UPD) AI Program released a statement encouraging the university to revisit academic dishonesty guidelines to include AI tools after a professor accused a student on Facebook of using generative AI in their final exam.
Generative AI, such as ChatGPT, is a type of AI that generates new content, such as text or images, based on prompts from the user. These models are trained on large datasets.
UP has since outlined its Principles for Responsible Artificial Intelligence, making it the first university in the country to do so. Safety, accountability, and the use of AI for the public good are among the principles listed in the draft policy.
For Johnrob Bantang, director of the UP Computational Science Research Center and an associate professor of physics, cases of intellectual dishonesty may already be addressed under existing policies and guidelines.
“We currently have enough academic policies that will enable us to navigate this new environment of AI, I think. It’s only a matter of finding them, and it’s all embedded, at least in UPD, in the faculty manual and the student [code],” he said.
The proposed principles also outline ethical concerns and risks around the use of AI in research. However, Bantang said UP's AI policy must not be too restrictive in regulating AI research.
“The pressing concern is an overdoing of the policy. While these technologies are new, we should allow people, especially at the national university of the Philippines, to explore and be allowed to check all the possibilities. Not that while it’s still beginning, we already prune the possibilities,” he said.
While UP has asked members of the academic community to comment on the proposed AI policy, Pagdanganan said the university must hold more active discussions with student organizations and councils.
“There is no use in creating comprehensive AI policies if those who benefit from these are primarily the ruling elite instead of the broad masses. The perspective of the university should be, ‘How can we use AI to serve the people?’ That should be the guiding principle of the university for any emerging and disruptive technology,” Pagdanganan said. ●