
Primarily interested in Go, Python, Swift, Android (Java) and Web stuff. So, relatively mainstream languages, nothing too esoteric.
What have people found to be the best copilot paid subscription, at the moment?
I have been a big user of ChatGPT-4 ever since it came out. I have used it in various forms: through the chatbot, through GitHub Copilot, etc. For the past year, I have looked high and low for an alternative that performed better. Nothing came close. Until a few days ago, when Anthropic released Claude 3. For software development work, it is very, very impressive. I've been running GPT-4 and Claude 3 Opus side by side for a few days, and I have now decided to cancel all AI subscriptions except Claude 3.
For software development work, no other model comes close.
+1, Claude 3 Opus is a game changer.
Since OP asked about a Copilot product, we (YC W23) actually built https://double.bot to do exactly this: high quality UX + most capable models. Although we don't have a subscription right now, it's free for the time being :)
At a glance, Double seems to be similar to Sourcegraph Cody - but Cody is half the price. What does Double offer that Cody doesn't?
Don’t have a subscription? What about this? [0]
Not that I mind a subscription, I’d pay it in a heartbeat if there were a JetBrains plugin.
We implemented the subscription a few hours ago, and still offer a very generous free tier. Unfortunately, we learnt that it's not possible to have an uncapped free tier: we saw several abusers use our API to run their own apps, which is obviously against the spirit of a coding copilot :)
JetBrains plugin is on the roadmap, soon!
Awesome, I'll be sure to try it out when the JetBrains plugin is available.
Looks great! Would you have plans to release a jetbrains plugin in future?
Free is even better ;), will take a look, thanks.
For those who have read Vonnegut, the Claude logo is hilarious.
Any plans for Emacs support?
How do you get access to Claude 3? It doesn’t seem to be available yet.
I've only used it through third party services but I think it's available both on their site and as an API. Can't confirm though, I'm in Europe.
https://double.bot for VS Code works really well for me. https://writingmate.ai/labs for typical browser chat ui (looks ok on mobile too).
Wasn't aware that it's not widely available but if you can't find it anywhere else, we have it available in our free tier on Double: https://double.bot
Are you planning to offer plugins for other IDEs, like IntelliJ?
Only Sonnet, unless there is a special release group that has access to the other models.
Not sure what Bedrock is, but last time I looked at their website it claimed it isn't available in my region.
I'm in Ireland, so I interpreted that as "Not out yet".
Here is a list of countries Claude is available: https://support.anthropic.com/en/articles/8461763-where-can-...
> I'm in Ireland, so I interpreted that as "Not out yet".
What I interpret it as is "Not compliant with GDPR yet".
Another option: Kagi's chat feature lets you choose Claude 3 Opus (or GPT-4 or Mistral).
You have to pay for Kagi Ultimate ($25/month) to access it I think.
claude.ai
OpenAI API key with GPT4 plus aider[1]: https://github.com/paul-gauthier/aider
For reference against the actual product called “Copilot”: I would say this is actually useful, whereas for Copilot I would use phrases like “glorified autocomplete” and “slightly better IntelliSense”. It's only really good for saving you some rote typing periodically.
The primary limit with “aider” is the GPT-4 context window, so really its only learning curve is learning to work within that.
I have been curious about sourcegraph’s “Cody” product if anyone here has tried it
re Cody - I found it very similar to Codium, Copilot, et al. Impressive when it works, but it consistently has trouble identifying the right context to inject or using it effectively. These tools are at their best when writing code that doesn't require a lot of local context: speeding up standard operations that need a lot of boilerplate, or working with publicly documented, stable APIs.
re getting some keys - for most tasks I have not yet found a better tool than finding a chat interface to your liking, setting a fairly generic “you're an expert programmer” system prompt with output instructions tuned to your needs, and manually adding relevant context (copy/paste) to your messages when relevant. I can't wait for big context windows and RAG methods to improve enough to replace all that with one of these assistants, but it's just not there yet.
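The manual workflow described above can be sketched as plain message assembly. This is a minimal illustration, not any particular library's API: the `build_messages` helper and its wording are made up for the example, and the resulting list is in the `role`/`content` shape most chat-completion APIs accept.

```python
# Sketch of the manual workflow: a generic "expert programmer" system
# prompt, plus hand-pasted context, assembled into a chat payload.
# All names here are illustrative, not from any library.

def build_messages(question: str, context_snippets: list[str]) -> list[dict]:
    system = (
        "You are an expert programmer. "
        "Answer concisely and return code in fenced blocks."
    )
    # Paste only the context relevant to this question, to stay
    # within the model's context window.
    context = "\n\n".join(context_snippets)
    user = f"Relevant code:\n{context}\n\nQuestion: {question}"
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]

msgs = build_messages(
    "Why does this loop never exit?",
    ["while x > 0: x += 1"],
)
# msgs can then be sent to whichever chat-completion API you prefer.
```

The point of doing this by hand is control: you decide exactly which snippets go in, rather than trusting a copilot's context-retrieval heuristics.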
I love aider but it feels like a giant black box that consumes copious credits.
The output is amazing though, so I keep using it. But I feel like I have no insight or control. Maybe I should read the docs more carefully.
I use mine in earnest, with the full context window filled, prompting probably 20-30x a week, and I haven't gotten a monthly bill over $5 USD in a long while. When I first started using it last year, when GPT-4 access was limited and tokens were expensive, it was a lot more; I think my bills were around $200 USD per month at that point.
I switched from GitHub Copilot to Sourcegraph and have not wanted to switch back yet.
The output is more or less of similar quality as far as I can tell. However, what makes me prefer Sourcegraph's Cody is the more advanced interface it gives me in VSCode. It lets me do all the things I want (get quick answers to code-related questions, refactor code, write tests, and explain confusing code), all just one hotkey away.
In addition to paid subscriptions, you might want to consider running an LLM locally. One of a number of projects enabling this approach is "continue" - https://github.com/continuedev/continue
The issue with local IMO is that you're going to have to compromise on quality. And I really don't want to waste my time talking with inferior models when better models are widely available.
Yeah, at the time of writing this comment it feels like if you want to easily convince me that your local LLM is good enough, just do a side by side comparison of prompts with your LLM vs GPT4. It doesn’t even have to be better. It just has to be kind of in the same ballpark
I'd like to try this, do you have some prompts you like to use for evaluation?
People who think that their local LLMs are in the same ballpark as GPT or Claude are delusional.
Haven't found any good local model yet that rivals or comes close to GPT-4.
I have dabbled with the quant version (exl), still not the same. Which quant/version are you using? There are so many that the output shifts dramatically!
I'm using GGUF Q5 and I find the output degrades going even to Q4. Mistral has said they still intend to open newer, bigger models, so I'm really looking forward to that.