I tested Google Bard’s new coding skills. It didn’t go well
Previously, we discussed how Bard could provide some coding help to programmers but couldn't actually write code. That's changed: on Friday, Google announced that Bard can code. But can it code well?
Let’s find out.
To come up with an answer, I’m going to run some of the coding tests I gave to ChatGPT. We’ll see how Bard does, and compare the results.
Writing a simple WordPress plugin
My initial foray into ChatGPT coding was a WordPress PHP plugin that provided some functionality my wife needed on her website. It was a simple request, merely asking for a list of submitted lines to be randomized with duplicate entries kept apart, but when ChatGPT wrote it, it gave my wife a tool that helps her save time on a repetitive task she does regularly for work.
Here’s the prompt:
Write a PHP 8 compatible WordPress plugin that provides a text entry field where a list of lines can be pasted into it and a button, that when pressed, randomizes the lines in the list and presents the results in a second text entry field with no blank lines and makes sure no two identical entries are next to each other (unless there’s no other option)…with the number of lines submitted and the number of lines in the result identical to each other. Under the first field, display text stating “Line to randomize: ” with the number of nonempty lines in the source field. Under the second field, display text stating “Lines that have been randomized: ” with the number of non-empty lines in the destination field.
And here’s the generated code that Bard wrote:
At first glance, it looks good. It isn't. The UI is not formatted properly and, worse, the plugin doesn't work: clicking the Randomize button just clears both fields. That's it.
By contrast, ChatGPT built a fully functional plugin right out of the gate.
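To give a sense of what the prompt actually asks for, the core logic (keep the non-empty lines, shuffle them, and keep identical entries apart where possible) only takes a few lines of PHP. This is a minimal sketch of that logic, with a made-up function name; it is not the full plugin and not the code either chatbot produced:

<?php
// Minimal sketch of the core requirement: keep only non-empty lines,
// shuffle them, and avoid placing identical entries side by side
// whenever another arrangement is possible.
function randomize_lines( string $input ): array {
    $lines = array_values( array_filter( array_map( 'trim', explode( "\n", $input ) ), 'strlen' ) );
    shuffle( $lines );

    // Repair pass: if two identical neighbors appear, swap one of them
    // with a later line that breaks the tie (when such a line exists).
    $count = count( $lines );
    for ( $i = 1; $i < $count; $i++ ) {
        if ( $lines[ $i ] === $lines[ $i - 1 ] ) {
            for ( $j = $i + 1; $j < $count; $j++ ) {
                if ( $lines[ $j ] !== $lines[ $i - 1 ] ) {
                    [ $lines[ $i ], $lines[ $j ] ] = [ $lines[ $j ], $lines[ $i ] ];
                    break;
                }
            }
        }
    }

    return $lines; // Same number of non-empty lines out as in.
}

The rest of the request (the two text fields, the button, and the line counts) is standard WordPress admin-page plumbing.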
Fixing some code
Next, I tried a routine I’d previously fed into ChatGPT that came from my actual programming workflow. I was debugging some JavaScript code and found that I had an input validator that didn’t handle decimal values. It would accept integers, but if someone tried to feed in dollars and cents, it failed.
I fed Bard the same prompt I fed ChatGPT, and this is what resulted:
The code generated here was much longer than what came back from ChatGPT. That's because Bard didn't use a regular expression in its response, and instead gave back the kind of very simple validation script you'd expect from a first-year programming student.
And, like something you'd expect from a first-year programming student, it was wrong. It properly validates the value to the left of the decimal point, but allows anything (including letters and symbols) to the right of it.
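For comparison, the check a correct answer needs is essentially a one-line regular expression. The validator in question was JavaScript, but the idea is the same in any language; here it is sketched in PHP to match the other examples in this piece, with a made-up function name:

<?php
// Sketch of a dollars-and-cents validator built on a regular expression:
// one or more digits, optionally followed by a decimal point and one or
// two digits. Anything else (letters, symbols, a dangling point) fails.
function is_valid_amount( string $value ): bool {
    return preg_match( '/^\d+(\.\d{1,2})?$/', $value ) === 1;
}

var_dump( is_valid_amount( '19.99' ) ); // bool(true)
var_dump( is_valid_amount( '19.x9' ) ); // bool(false)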
Finding a bug
During that same programming day, I encountered a PHP bug that was truly frustrating me. When you call a function, you often pass it parameters, and the function has to be written to accept however many parameters the calling code sends it.
As far as I could tell, the right number of parameters was being passed, yet I kept getting an incorrect-parameter-count error. Here's the prompt:
When I fed the problem into ChatGPT, the AI correctly identified that I needed to change code in the hook (the interface between my function and the main application) to account for parameters. It was absolutely correct and saved me from tearing out my hair.
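In WordPress terms, that kind of mismatch usually comes down to the $accepted_args parameter of add_action() or add_filter(), which defaults to 1: if the hook registration doesn't say how many arguments the callback takes, WordPress passes only the first one and PHP complains about the argument count. A minimal sketch, with hypothetical hook and function names:

<?php
// Hypothetical callback that needs two arguments.
function myplugin_handle_donation( $donation_id, $amount ) {
    // ... do something with both values ...
}

// Broken: WordPress defaults to passing only 1 argument to the callback.
// add_action( 'myplugin_donation_saved', 'myplugin_handle_donation' );

// Fixed: priority 10, and tell WordPress the callback accepts 2 arguments.
add_action( 'myplugin_donation_saved', 'myplugin_handle_donation', 10, 2 );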
I passed Bard the same problem, and here’s its answer:
Wrong again. This time, Bard simply told me that the problem was a parameter mismatch and that I needed to pass the donation ID, which was not the fix. Once again, ChatGPT succeeded and Bard failed.
For the record, I looked at all three of Bard’s drafts for this answer, and they were all wrong.
‘Hello, world’ test
Last week, I asked ChatGPT to generate code in 12 popular programming languages (including Python) to display “Hello, world” ten times, and to determine if it was morning, afternoon, or evening here in Oregon. ChatGPT succeeded for the mainstream languages.
I fed the same prompt to Bard. Since it had been wrong on everything so far, I just picked one language to test, asking it to generate some Python code:
Although Bard's method for determining the time of day was a bit more convoluted than it needed to be, the result was workable.
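The task itself is simple enough to sketch. To stay consistent with the other examples in this piece, here it is in PHP rather than Python; the only mildly tricky part is pinning the check to Oregon's timezone:

<?php
// Print "Hello, world" ten times, then report whether it is morning,
// afternoon, or evening in Oregon (America/Los_Angeles, Pacific time).
for ( $i = 0; $i < 10; $i++ ) {
    echo "Hello, world\n";
}

$now  = new DateTime( 'now', new DateTimeZone( 'America/Los_Angeles' ) );
$hour = (int) $now->format( 'G' ); // 24-hour clock, 0 through 23

if ( $hour < 12 ) {
    echo "Good morning\n";
} elseif ( $hour < 18 ) {
    echo "Good afternoon\n";
} else {
    echo "Good evening\n";
}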
So, can Bard code?
Bard can definitely write code. But in three of my four tests, the code it wrote didn’t work properly. So I wouldn’t necessarily say that Bard can code.
I’ll tell you this. If I were hiring a programmer and gave them the above four assignments as a way of testing their programming skills, and they returned the same results as Bard, I wouldn’t hire them.
Right now, Bard can write code... like a first-year programming student who will probably get a C grade for the semester.
Given how good ChatGPT is, Google’s answer is … embarrassing.
You can follow my day-to-day project updates on social media. Be sure to follow me on Twitter at @DavidGewirtz, on Facebook at Facebook.com/DavidGewirtz, on Instagram at Instagram.com/DavidGewirtz, and on YouTube at YouTube.com/DavidGewirtzTV.