AI Chatbots: Creating a WordPress Plugin – Functionality, Readability, and Adherence to Coding Standards

What to Know:

– Six AI chatbots were tasked with creating a simple WordPress plugin.
– The chatbots used in the experiment were ChatGPT, Bard, Bing, Claude 2, Code Llama, and Llama 2.
– Each chatbot was given the same prompt and asked to generate code for a WordPress plugin.
– The generated code was evaluated based on its functionality, readability, and adherence to WordPress coding standards.
– The chatbots were also evaluated on their ability to understand and respond to follow-up questions.

The Full Story:

Creating a WordPress plugin can be a complex task that requires knowledge of coding and familiarity with WordPress coding standards. To test the capabilities of AI chatbots in this area, an experiment was conducted to see how well six different chatbots could create a simple WordPress plugin.

The chatbots used in the experiment were ChatGPT, Bard, Bing, Claude 2, Code Llama, and Llama 2. Each chatbot was given the same prompt, which asked it to generate code for a WordPress plugin that displays a random quote on a website.
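
For context, a plugin that satisfies this prompt can be quite small. The sketch below is a hypothetical minimal implementation, not code produced by any of the chatbots in the experiment; the rqd_ prefix, the [random_quote] shortcode name, and the sample quotes are illustrative choices.

```php
<?php
/**
 * Plugin Name: Random Quote Display
 * Description: Displays a random quote via the [random_quote] shortcode.
 * Version:     1.0.0
 */

// Abort if the file is accessed directly instead of through WordPress.
if ( ! defined( 'ABSPATH' ) ) {
    exit;
}

/**
 * Return the markup for a randomly chosen quote.
 *
 * @return string Escaped HTML for the quote.
 */
function rqd_random_quote_shortcode() {
    $quotes = array(
        'Simplicity is the soul of efficiency.',
        'First, solve the problem. Then, write the code.',
        'Make it work, make it right, make it fast.',
    );

    // Pick a random key and escape the quote on output,
    // as the WordPress coding standards recommend.
    $quote = $quotes[ array_rand( $quotes ) ];

    return '<blockquote class="rqd-random-quote">' . esc_html( $quote ) . '</blockquote>';
}
add_shortcode( 'random_quote', 'rqd_random_quote_shortcode' );
```

Dropping a file like this into wp-content/plugins/, activating it, and adding [random_quote] to a post or page would render one of the quotes at random.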

The generated code from each chatbot was evaluated on three criteria: functionality, readability, and adherence to WordPress coding standards. Functionality was assessed by running the code on a WordPress website to confirm that it worked as intended. Readability was judged by how well the code was organized and commented. Adherence to WordPress coding standards was assessed by checking whether the code followed the recommended practices and guidelines.

In terms of functionality, all six chatbots were able to generate code that successfully displayed a random quote on a website. However, there were differences in the quality of the code generated. ChatGPT, Bard, and Bing produced code that was functional but had some minor issues, such as missing error handling or inefficient code structure. Claude 2, Code Llama, and Llama 2 generated code that was more robust and efficient.

When it came to readability, the chatbots varied in their ability to generate code that was easy to understand and maintain. ChatGPT and Bard produced code that was difficult to read and lacked proper organization. Bing, Claude 2, and Code Llama generated code that was relatively readable, with clear comments and logical structure. Llama 2 stood out as the best in terms of readability, producing code that was well-organized and easy to follow.

In terms of adherence to WordPress coding standards, the chatbots again showed varying levels of performance. ChatGPT, Bard, and Bing produced code that did not fully comply with WordPress coding standards, with issues such as inconsistent naming conventions and lack of proper sanitization. Claude 2, Code Llama, and Llama 2 generated code that followed the recommended practices and guidelines more closely, with proper naming conventions and sanitization.
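
To make the naming and sanitization points concrete, the hypothetical snippet below shows the kind of handling such a review looks for: a unique rqd_ prefix on function and option names, capability and nonce checks, and sanitization of request data before it is stored. It is an illustrative sketch under those assumptions, not code taken from any chatbot's output.

```php
<?php
/**
 * Save a custom quote submitted from a hypothetical settings form
 * that posts to admin-post.php with action=rqd_save_quote.
 */
function rqd_save_custom_quote() {
    // Only administrators may update the stored quote.
    if ( ! current_user_can( 'manage_options' ) ) {
        return;
    }

    // Verify the nonce printed in the settings form.
    check_admin_referer( 'rqd_save_quote' );

    if ( isset( $_POST['rqd_custom_quote'] ) ) {
        // Sanitize raw request data before it reaches the database.
        $quote = sanitize_text_field( wp_unslash( $_POST['rqd_custom_quote'] ) );
        update_option( 'rqd_custom_quote', $quote );
    }

    wp_safe_redirect( admin_url( 'options-general.php?page=rqd-settings' ) );
    exit;
}
add_action( 'admin_post_rqd_save_quote', 'rqd_save_custom_quote' );
```

Escaping on output, as in the earlier shortcode sketch, is the complementary half of the same standard.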

In addition to evaluating the generated code, the chatbots were also assessed on their ability to understand and respond to follow-up questions. ChatGPT and Bard struggled here, often providing irrelevant or incorrect information. Bing, Claude 2, and Code Llama performed better, giving more accurate and relevant responses. Llama 2 showed the best performance in understanding and responding to follow-up questions, demonstrating a good grasp of the task and providing helpful information.

Overall, the experiment showed that AI chatbots can generate code for a simple WordPress plugin, but their output varies widely in functionality, readability, adherence to coding standards, and handling of follow-up questions. Some produced code that worked but was hard to read or ignored WordPress conventions, while others generated more robust and efficient code. Llama 2 stood out as the strongest performer on both readability and follow-up questions.

The results of this experiment highlight the potential of AI chatbots to assist with coding tasks, but they also underscore the importance of human review and refinement. AI chatbots can be a valuable tool for generating code, but human intervention is still necessary to ensure the code meets the desired standards and requirements.

Original article: https://www.searchenginejournal.com/creating-wordpress-plugin-ai-chatbots/495595/