adamtwar
Joined 1 year ago
Karma: 1
Posts: 0
Replies: 18
About: None

History
-
In reply to "Anthropic Releases Claude 3.5 Sonnet, The First Release in Claude 3.5 Family, Outperforms GPT-4o on Benchmarks": In Claude 3.5, Anthropic introduced the term "artifact" so that they could instruction-fine-tune the model on the term, which is different than common terms like "asset" or "attachment", and not very… (1 year, 5 months)
-
In reply to "Microsoft Copilot Pro vs ChatGPT Plus": Basically, I think the UX of ChatGPT (both web & mobile app) has its flaws but is by far the best out of the end-user LLM offerings I've tried (at the paid tier): ChatGPT, Google Gemini, You.com, Claude.… (1 year, 9 months)
-
In reply to "Microsoft Copilot Pro vs ChatGPT Plus": However, only the ChatGPT service (website & iOS app) lets me paste pretty long inputs into the text box. Poe & You.com tell me that the prompt is too long and ask me to "upload a TXT file" instead, … (1 year, 9 months)
-
In reply to "Microsoft Copilot Pro vs ChatGPT Plus": I AM really impressed with You.com Pro, which is $15/month, has unlimited access to both GPT-4 and GPT-4 Turbo (you can explicitly choose), also to Claude 2 & Gemini Pro, and it has several websearch-… (1 year, 9 months)
-
In reply to "Microsoft Copilot Pro vs ChatGPT Plus": The limitation of 4000 chars in Copilot makes many prompts unfeasible. On the other hand, Copilot Pro gives me access to "GPT-4 Creative" (the REAL GPT-4), "GPT-4 Turbo Creative", and "GPT-4 Precise"… (1 year, 9 months)
-
In reply to "Microsoft Copilot Pro vs ChatGPT Plus": My MS Copilot Pro iOS app gives me just 4000 characters of input, which is not great. On the other hand, its web search integration seems much better than that of ChatGPT 4, but worse than You.com Pro e… (1 year, 9 months)
-
In reply to "Groq's Language Processing Unit (LPU™) is World's Fastest LLM Speed (Alpha Demo)": Groq has an API waitlist at https://console.groq.com/ — its speed is truly insane, especially with Mixtral 8x7B. Mixtral is a very capable model and I think if served via Groq, it can truly serve as a… (1 year, 9 months)
-
In reply to "Groq's Language Processing Unit (LPU™) is World's Fastest LLM Speed (Alpha Demo)": > https://thisdayinai.com/post/105-groqs-language-processing-unit-lputm-is-worlds-fastest-llm-speed-alpha-demo/ Apart from the Groq website, these models are available through the Poe.com subscripti… (1 year, 9 months)
-
In reply to "Gemini 1.5 outshines GPT-4-Turbo-128K on long code prompts, HVM author": > https://thisdayinai.com/post/104-gemini-15-outshines-gpt-4-turbo-128k-on-long-code-prompts-hvm-author/ This is comparing Gemini Pro 1.5 MoE with GPT-4 Turbo, which is a fair comparison. GPT-4 Turb… (1 year, 9 months)
-
In reply to "Groq's Language Processing Unit (LPU™) is World's Fastest LLM Speed (Alpha Demo)": Wrong thread (1 year, 9 months)
-
In reply to "Groq's Language Processing Unit (LPU™) is World's Fastest LLM Speed (Alpha Demo)": This is comparing Gemini Pro 1.5 MoE with GPT-4 Turbo, which is a fair comparison. GPT-4 Turbo isn't GPT-4, and Gemini Pro isn't Gemini Ultra. However, I have a strong feeling that Google has worke… (1 year, 9 months)
-
In reply to "Google Announces Gemini 1.5, with a new Mixture-of-Experts (MoE) architecture": People have demonstrated that LLMs with 32k-100k input contexts can (with the right prompting) summarize the "narrative flow" of a long text. If they can summarize, then they can also do things like … (1 year, 9 months)
-
In reply to "Google Announces Gemini 1.5, with a new Mixture-of-Experts (MoE) architecture": Obviously the same is true for all kinds of "consistent text corpus" work: the Star Trek creators can stuff in the entire collection of scripts for all the Star Trek movies and shows, and have the model s… (1 year, 9 months)
-
In reply to "Google Announces Gemini 1.5, with a new Mixture-of-Experts (MoE) architecture": A context window of 1M tokens or even 10M tokens is extremely useful for literature creation. Let's say an author has 5 novels she has written, and she wants to use an LLM as a writing aid for the 6th… (1 year, 9 months)
-
In reply to "YOUR THOUGHTS: Gemini Advanced": I cannot find it anymore, but after Google introduced the Gemini "Advanced" chatbot plan, someone from Google said that for some operations like image analysis, the chatbot still uses the Gemini Pro m… (1 year, 10 months)
-
In reply to "New GPT-4 Preview Model Designed to Stop Laziness is Actually More Lazy": I've been using "ChatGPT 4" (= GPT-4 Turbo 128K/4K with automatic tool switching) vs. "ChatGPT Classic" (= GPT-4 8K) for coding in ChatGPT for the last months, and the Turbo model has always been "la… (1 year, 10 months)
-
In reply to "New GPT-4 Preview Model Designed to Stop Laziness is Actually More Lazy": OpenAI made this all very confusing. Many people think that the GPT-4 Turbo models are supposed to be an "improvement" over GPT-4: no, they're not, they're a different line of models: faster but much… (1 year, 10 months)
-
In reply to "New GPT-4 Preview Model Designed to Stop Laziness is Actually More Lazy": The "GPT-4 Turbo" models are nothing like the older "GPT-4" models: they're very obviously distilled and/or quantized. Also, the Turbo models have a 128K input context but a max of 4K output. The classic GPT-4 (a… (1 year, 10 months)

