HappyPolygon (Premium Member)
  • Posts: 1,912
  • Days Won: 97

Everything posted by HappyPolygon

  1. What does ECC stand for? (when talking about GPU RAM)
  2. @Daniel Seebacher I've been using Vue for a long time (Vue 2016), but I need to know for sure: when I use Vue as a plugin in C4D, is it true that I can render a Vue scene using C4D's native renderer, or did I misunderstand that from the manual? One more thing before the new release... Please design (well, not you, the team, you know) some nice icons for the Vue scene objects in the C4D Object Manager. Those gray squares look so broken...
  3. In the first attempt in the screenshot I thought it created point-like circles. I had in mind something like what this guy did here, which doesn't seem to be too heavy. But it also lacks the recursive inflation of the circles (a rough sketch of that inflation idea is at the end of this list)... It kinda bugs me that the AI creates flawless Blender code, but then again maybe I should reconsider how I ask it to do things, like this guy here did by using explicit function names in his queries.
  4. My spark for environmental modeling was always Magic: The Gathering land cards. Unfortunately I don't possess, and probably never will, the amount of RAM needed to populate my environment with the appropriate amount of foliage and flora...
  5. It's OK. I think it's not the first time E-on skips a year before releasing the next major version.
  6. Oh, they have a phrase to describe Putin... That's weird though, shouldn't the compiler throw a library error for that, something like "no SPLINETYPE_LINE class/attribute/function found in module c4d"? I didn't get an error (a small attribute-lookup check is at the end of this list). Great post SHARPEARS, this is how one can start learning C4D Python: not just with examples but also counter-examples and side notes explaining why it is done one way and not the other. I guess this is my training model, the so-called "hands-on, real-problem-solving" or "project-based" method. As for the rest of the algorithm, which I am tapping my feet with excitement to see, I have two ideas of how it should go...
  7. No, still waiting for it! I checked yesterday. How are they supposed to name the new release 2022 if it's released in 2023...?
  8. I think this behavior was either hard-coded into the AI or an inevitable side effect, and this is reflected in this answer. So its purpose is to assist and nothing more, and it is coded to recognize that assistance as its only goal. It has been trained on a huge yet limited pool of knowledge and released to assist based on that knowledge. So from the AI's point of view it assumes that whatever is being asked can be answered with the provided base knowledge pool, because that was the reason the developers made it. So when it answers, it does so with a sense of complete authority and all-knowingness, because that's how it was designed. It was designed to assist, and that implies an act of competence, not doubtfulness. It wasn't designed to question the training knowledge or to contemplate whether the answer given was wrong as a self-validation/improvement step (maybe that would lead to an endless feedback loop, or eventually always end in an "I don't have enough data to answer correctly" type of response). The developers gave it the ability to remember previous parts of the conversation so it can stay on topic without derailing, and that also means the user can point out faulty inferences the AI has made. So re-feeding the answer to the AI can improve it up to a point, and of course if it gets logically cornered the AI will apologize. I guess it has some deductive logic to derive new knowledge not explicitly expressed in the training data and answer questions in a more general/abstract way, but it still fails to "see" and use information as we do. You could say it's an 8-year-old child that has read everything, including the Oxford dictionary, in order to hold interesting conversations, but it lacks the experience of connecting input information in the right way with the base training knowledge; in other words, the logic that leads to wisdom. I've seen some funny memes depicting exactly that kind of inability. But it has great language skills. Mathematics and programming are closely related, as both contain logic. Most people are not good at math and logic. Language was the first great intelligence step, then writing and then mathematics. I guess it will be the same with AI too.
  9. You're a living CG encyclopedia, Fritz. I wish I had a professor like you. Happy new year.
  10. Sorry for any misunderstanding. The VRAM I was referring to was the OS technique of temporarily transferring data from random-access memory (RAM) to disk storage (virtual memory). That OS-managed memory can be expanded, but the GPU's RAM cannot. This is probably what JAEE also thought. Does that mean that all VDB elements (imported and internally generated), although stored on the HD, cannot be used partially (one specific frame at a time) unless fully loaded into the GPU's VRAM? Is there any trick to make it load them into regular RAM (or not load them at all but still keep them part of the project file and retrieve only the essential frame for rendering), like disabling any previews or using XRef?
  11. I'm afraid not. If you open the Task Manager and the graphs in the Performance tab reach the ceiling, there isn't anything you can do. Pyro probably doesn't use virtual memory, only the GPU's RAM. Fritz is the technical expert here and suggested exporting to VDB.
  12. If I needed to do something like that, I would build my scene the usual way and, once that point of complexity is reached, just use a filter in the OM to filter out cameras/objects/lights etc., grab the results and put them in their respective layers.
  13. Haven't seen any option like that. Actually, I searched for one myself because I assumed it had a default VRAM limit for cached simulations.
  14. For me, for this snippet, the main element that needs fixing is the camera movement. It feels like an old game cinematic: it stopped too suddenly in the first scene, and the second scene was too static. Possible solutions for the first scene: use an ease-out/deceleration movement, or cut before the camera comes to a complete halt, but the camera should already be moving slowly at that point to do so (a tiny ease-out sketch is at the end of this list). Possible solutions for the second scene: use a slow zooming movement towards the object, use targeting to keep the object in the center of the view, or use both of the above. So the main problem (as I see it) is cinematographic/directorial. I suggest you study all of the videos in this amazing channel, but start with this video about camera movement and this video about camera angles, and build your knowledge from there. The second problem is compositional: the floating particles and cross are all in front of the sword, shattering any sense of perspective or depth for the sword in the center of the scene.
  15. Please edit your post to mention the version of C4D you're using, and maybe your graphics card too. If you can't upload a screenshot/scene file, try posting a link to an external repository like WeTransfer, Dropbox etc. so we can open your scene. Is this a recent change of behavior, or was it like this from the moment you installed C4D?
  16. Here's a Substance Designer solution. Here are some PS and GIMP solutions. Here's an Agisoft Metashape Professional solution. Some AI online tool I found. Some other AI app. This app does not claim to remove shadows but might help you in other ways.
  17. Where's that screenshot from? Did they do it in PS with a clone brush?
  18. Maybe you don't have enough VRAM, or just not enough RAM. Try saving the simulation to .VDB files in 65-frame intervals.
  19. I can't even imagine how they trained it... Did they just feed it the SDK manual? Does it combine general Python knowledge with snippets of C4D code from the internet? Still amazing.
  20. The disadvantage is that you need a high-density polygon surface to hide any pixelated effects... and that can be heavy for a real-time-calculated vertex map. If I were you, I'd make a special surface just in the areas where the cars pass, to avoid having unused polygons or vertex areas (a small vertex-map sketch showing why the density matters is at the end of this list).
  21. Even the AI is using a more recent C4D than me ... 🤣 Oh well... I can't do anything but wait till summer to get back to my R25 laptop...
  22. So it's been a week since I saw one of those Two Minute Papers videos... and he's been asking the new ChatGPT all kinds of weird stuff to test its capabilities in math. I didn't really pay much attention. Until a few minutes ago, when I stumbled across this video where this guy is asking it to script things in Blender... And I'm like "Let's see how biased this little AI is. I bet it doesn't even know what 'C4D' means"... Ooohh boy! This thing does know how to script! I don't know shit about C4D Python calls, but I know there is no intersection-checking mechanism when I don't see one. So I asked it for one... Anyway... I did check the first algorithm in C4D using the Python Generator, but nothing came out. The compiler found no syntax errors, so I don't know what it did wrong, since the only Python I've written in C4D is in XPresso, and only to evaluate simple numerical functions (a minimal Python Generator sketch that does show something is at the end of this list). Maybe it's been trained for older or newer versions than what I use (R20). Here's the script for anyone wanting to test or correct it: I didn't want to end my game so quickly, so I asked it something else... Still nothing in the viewport... OK, one last time. Well, the script doesn't work, but I'm impressed that it knew how to make the most common projection of a hypercube onto 3D space... EDIT: I noticed that the spoiler boxes did not keep the code I put in yesterday, so I asked the same questions again. The first two answers were almost identical (the first one actually now has an intersection check), but on the third one this time it didn't extrude the cube, it used a 4D vector to scale it along the 4th dimension... I'm not sure if this is a valid action in C4D... Even if it is, it called for a regular 3D cube, and scaling it into the 4th dimension doesn't make it a hypercube. Have fun! https://chat.openai.com/chat
  23. El Barto... I think he goes well with the spray can of your previous artwork.
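
A rough sketch of the "inflate until they touch" idea from post 3, in plain Python (no c4d or Blender calls; all names and parameters here are my own, so treat it as pseudocode to adapt): circles are seeded at random positions and each one grows until it would overlap a neighbour or leave the canvas.

```python
import random, math

def pack_circles(n=200, width=400.0, height=400.0, step=0.5, max_r=60.0):
    circles = []                                   # each entry: (x, y, radius)
    for _ in range(n):
        x, y = random.uniform(0, width), random.uniform(0, height)
        r = 0.0
        while r < max_r:
            grown = r + step
            # stop growing if the circle would overlap a neighbour...
            if any(math.hypot(x - cx, y - cy) < grown + cr for cx, cy, cr in circles):
                break
            # ...or poke outside the canvas
            if not (grown <= x <= width - grown and grown <= y <= height - grown):
                break
            r = grown
        if r > 0.0:
            circles.append((x, y, r))
    return circles

print(len(pack_circles()), "circles packed")
```

The recursive part seen in that video could be approximated by re-running the packing with a smaller max_r inside the leftover gaps.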
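
About the missing error in post 6: Python only resolves a name like c4d.SPLINETYPE_LINE when that exact line actually runs, so a bogus constant sails past the syntax check and should only raise an AttributeError at run time (and only if that code path executes). Here is a tiny probe for the Script Manager; I'm assuming SPLINETYPE_LINEAR is the constant that really exists, while SPLINETYPE_LINE is the one ChatGPT invented.

```python
import c4d

# Probe both names without crashing: getattr with a default never raises.
for name in ("SPLINETYPE_LINE", "SPLINETYPE_LINEAR"):
    value = getattr(c4d, name, None)
    print(name, "->", value if value is not None else "not found in module c4d")
```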
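
To make the ease-out suggestion in post 14 concrete, here is a tiny, renderer-agnostic sketch of a deceleration curve (plain Python; the cubic exponent is just a typical choice, and in C4D you'd shape the F-Curve rather than script it).

```python
def ease_out(t, power=3):
    """Map t in [0, 1] to an eased value that decelerates into 1."""
    return 1.0 - (1.0 - t) ** power

start, end, frames = 0.0, 500.0, 30      # e.g. a camera Z move over 30 frames
for f in range(frames + 1):
    z = start + (end - start) * ease_out(f / frames)
    print(f, round(z, 1))                # big steps early, tiny steps near the halt
```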
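
A small Script Manager sketch to go with post 20, showing why vertex-map detail follows polygon density: the map stores one weight per vertex, so a sparse mesh gives a blocky falloff. It assumes a polygon object is selected, and the API names are from the c4d Python module as I understand them.

```python
import c4d

def main():
    obj = doc.GetActiveObject()                    # 'doc' is predefined in the Script Manager
    if obj is None or not obj.IsInstanceOf(c4d.Opolygon):
        return
    points = obj.GetAllPoints()                    # local coordinates, one per vertex
    radius = obj.GetRad().GetLength() or 1.0       # rough bounding-box radius for normalization
    # one weight per vertex: 1.0 near the object's axis, fading to 0.0 outwards
    weights = [max(0.0, 1.0 - p.GetLength() / radius) for p in points]
    tag = c4d.VariableTag(c4d.Tvertexmap, len(points))
    tag.SetAllHighlevelData(weights)
    obj.InsertTag(tag)
    c4d.EventAdd()

if __name__ == '__main__':
    main()
```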
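
On the empty viewport in post 22: a Python Generator only displays whatever its main() returns, so a ChatGPT script written for the Script Manager (one that inserts objects into the document itself) produces nothing there even though it parses fine. A minimal generator sketch, with API names as I understand them:

```python
import c4d

def main():
    null = c4d.BaseObject(c4d.Onull)          # parent object that the generator will display
    for i in range(5):
        cube = c4d.BaseObject(c4d.Ocube)
        cube[c4d.PRIM_CUBE_LEN] = c4d.Vector(50)
        cube.SetAbsPos(c4d.Vector(i * 100.0, 0, 0))
        cube.InsertUnder(null)
    return null                               # forgetting this return = empty viewport
```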