In RPGs I always tend to think about which gender seems to fit the kind of playthrough I’m going for. For some reason, if I do something like a wood elf ranger, it would def be a female. Bulky warrior with a 2h axe? Guy. Sword-and-shield paladin who smites the unjust? Guy. Someone with keen knowledge of the arcane who freezes everything? Female. Cloak-and-dagger assassin? Female. Pyromancer? Guy. Witch? Female.
Sometimes I mix things up, like the paladin one. And sure, I also take personality into account which can switch it up.
Haven’t played Mass Effect so no idea about the “classes” in it.
I guess it is more about following people who talk about certain topics on Twitter. Like someone follows a few people about their hobby, some politicians maybe, and people covering other interests they have. I haven’t really gotten the idea that people use it to talk to friends per se, or to follow them. People do that as well for sure, but maybe it is not the main point.
Though I could be spewing shit because I don’t use it. Just the vibe I’ve been getting over the years.
I don’t get it. AI NPCs that actually work well sound fucking amazing. To have an actual conversation about anything in the game by typing your questions? That’s like the wet dream of an RPG.
Have writers write the background info, some lore stuff, “books” about stuff in the game etc.
I want to have an actual conversation with all the NPCs, not choose from four premade questions about a quest I am on.
And yes, obviously they have to work well or they’re extremely awkward and anti-immersive.
Man, and it works great. It is waaaaay more common to find good answers to a question from a bunch of randoms on the Internet than to get an actual answer from a random website. Sometimes you find bs, but you can usually filter it out quite quickly, and it gives a good basis from which to continue searching on the topic.
I kinda agree. I understand the idea; just like the article states, the environment is often a very important factor in a battle. But I feel like this would’ve been a great time to figure out something that other RTS games cannot do or haven’t done. Shift the focus elsewhere. And especially since they mention that the macro-scale decisions are more important, focus on that.
This is interesting, but the issue with perovskites is not efficiency per se, it is degradation. It is a cool tech, but their lifetime is measured in months, not decades. Here’s one study for instance. Perovskites might be where the tech is going, but there are still major hurdles to overcome.
The materials that do the absorption are not effective across the sun’s entire wavelength spectrum. They can only absorb within a certain wavelength range, but the sun’s spectrum is very wide.
Edit. There are also other reasons, like the recombination rate: a photon hitting the panel generates an electron-hole pair, which is then collected and used for energy, but electrons and holes tend to recombine, after which we cannot use them for energy. We want this rate to be zero, but it never is; it is a probabilistic process. So even if you could absorb everything, you can’t utilise everything you absorb.
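As a rough back-of-the-envelope illustration of the absorption limit described above (using silicon’s well-known bandgap of about 1.12 eV as an example; the function name is just for this sketch): a photon can only be absorbed if its energy h·c/λ exceeds the material’s bandgap, so every photon with a longer wavelength passes through unused.

```python
# Photons are only absorbed if their energy E = h*c / wavelength
# exceeds the bandgap. For silicon (Eg ~= 1.12 eV) that means
# photons with wavelength beyond ~1100 nm are simply lost, even
# though the solar spectrum extends well past that.

H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
EV = 1.602e-19  # joules per electronvolt

def cutoff_wavelength_nm(bandgap_ev: float) -> float:
    """Longest wavelength (in nm) a material with this bandgap can absorb."""
    return H * C / (bandgap_ev * EV) * 1e9

print(cutoff_wavelength_nm(1.12))  # silicon: ~1107 nm
```

Photons well above the bandgap get absorbed, but their excess energy is lost as heat, which is the other half of why no single material can use the whole spectrum efficiently.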
I’m confused. The article states that the monthly fee is to remove ad targeting, which I assume means no ads. It does NOT say that they won’t collect data on you, just literally that they won’t use the data they collect to serve you ads.
So this has nothing to do with opting out of data collection, just opting out of ads? That’s the feeling I get from the article.
Edit. Well, I guess I’m not confused. This is also the wording Meta’s post uses, and it definitely says nothing about opting out of data collection.
Everyone’s favorite chatbot can now see and hear and speak. On Monday, OpenAI announced new multimodal capabilities for ChatGPT. Users can now have voice conversations or share images with ChatGPT in real-time.
Audio and multimodal features have become the next phase in the fierce generative AI competition. Meta recently launched AudioCraft for generating music with AI, and Google Bard and Microsoft Bing have both deployed multimodal features for their chat experiences. Just last week, Amazon previewed a revamped version of Alexa that will be powered by its own LLM (large language model), and even Apple is experimenting with AI-generated voice through Personal Voice.
Voice capabilities will be available on iOS and Android. Like Alexa or Siri, you can tap to speak to ChatGPT, and it will speak back to you in one of five preferred voice options. Unlike current voice assistants out there, ChatGPT is powered by more advanced LLMs, so what you’ll hear is the same type of conversational and creative response that OpenAI’s GPT-4 and GPT-3.5 are capable of creating with text. The example that OpenAI shared in the announcement is generating a bedtime story from a voice prompt. So, exhausted parents at the end of a long day can outsource their creativity to ChatGPT.
Use your voice to engage in a back-and-forth conversation with ChatGPT. Speak with it on the go, request a bedtime story, or settle a dinner table debate. Sound on 🔊 pic.twitter.com/3tuWzX0wtS — OpenAI (@OpenAI) September 25, 2023
Multimodal recognition is something that’s been forecasted for a while, and is now launching in a user-friendly fashion for ChatGPT. When GPT-4 was released last March, OpenAI showcased its ability to understand and interpret images and handwritten text. Now it will be a part of everyday ChatGPT use. Users can upload an image of something and ask ChatGPT about it — identifying a cloud, or making a meal plan based on a photo of the contents of your fridge. Multimodal will be available on all platforms.
As with any generative AI advancement, there are serious ethics and privacy issues to consider. To mitigate the risk of audio deepfakes, OpenAI says it is only using its audio recognition technology for the specific “voice chat” use case, and the voices were created with voice actors the company has “directly worked with.” That said, the announcement doesn’t mention whether users’ voices can be used to train the model when they opt in to voice chat. For ChatGPT’s multimodal capabilities, OpenAI says it has “taken technical measures to significantly limit ChatGPT’s ability to analyze and make direct statements about people since ChatGPT is not always accurate and these systems should respect individuals’ privacy.” But the real test of nefarious uses won’t come until it’s released into the wild.
Voice chat and images will roll out to ChatGPT Plus and Enterprise users in the next two weeks, and to all users “soon after.”
You don’t say.