I posted this in another thread, but I think it belongs better here:
"So Gemini 3 Pro dropped today, which happens to be the day I proofread a historical timeline I'm assisting a PhD with. I do one pass and then realize I should try Gemini 3 Pro on it. I give the exact same prompt to 3 Pro and Claude 4.5 Sonnet. 3 Pro finds 25 real errors, no hallucinations. Claude finds 7 errors, but only 2 of those are unique to Claude. (Claude was better at "wait, that reference doesn't match the content! It should be $corrected_citation!") But Gemini's visual understanding was top notch. Its biggest flaw was that it saw wrapped words as having extra spaces. But it also correctly caught a typo where a wrapped word was misspelled, so something about it seemed to fixate on those line breaks, I think. A better test would have been 2.5 Pro vs. 3.0"
After continuing to use it, I genuinely think "It's a good model sir" and plan to add it to my rotation.
Feels like both tools have their own strengths.
I tried it via their Antigravity code editor.
I was expecting better.
I had some frontend code in Vue with obvious visual styling problems. I provided a screenshot and asked it to fix them.
Gemini kept switching between two versions, and both looked wrong. When I pointed out a specific problem, for example that the buttons were too big and didn't match the overall theme of the UI, it just toggled back to the other implementation, which had its own set of visual problems.
I switched back to Claude Code to fix those issues. Still not in one go, but it seemed smoother.
Today, I asked Gemini to start a project from scratch by looking at some reference code. It told me the implementation was done and that it had compiled and run it, but I saw tons of compile errors.
So in your experience, Claude performs better than Gemini 3 when it comes to coding?