Everything posted by HappyPolygon

  1. 1) Move the scene a bit away from the camera (can't be done if the animation path is not close to a straight line but contains turns).
     2) Change the camera's Field of View a bit (expect some slight differences).
     3) You can edit the animation path in the viewport by selecting Object Mode or Animation Mode.
     4) You can convert an Animation Path to a Spline and vice versa from the Timeline window menu. With a Follow Spline tag you can then assign the spline to a new camera as an animation path.
  2. A very interesting video was posted last month, and this time not by Two Minute Papers. It's a new research paper on acoustics with applications in game design, offering dynamic audio simulation based on the surrounding materials. Like in the real world, the acoustic waves get distorted depending on the surface material, changing the amplitude of a sound.
  3. I was amazed at how realistic the return of Egon was in the last Ghostbusters movie. The movie was released very close to The Mandalorian, which featured the deepfake of the young Luke Skywalker, and that one, although impressive, still had an uncanny-valley look. I was wondering if Egon was also a deepfake. He wasn't. MPC did amazing work recreating an aged Egon in CG. And fortunately the producers did not go too far by using any AI to reconstruct dialogue, which would have ruined the perfect illusion of the CG.
  4. Skybox Lab

     New update! Also new styles: Cartoon (90's cartoons), Storybook (looks a bit anime), Holographic (Tron style), Super Art (like watercolor bleed), Claymation (nice depth of field).
  5. Have you increased the Luminance to more than 100%? 100% means the object is just bright enough not to be obscured by the lack of environment lighting.
  6. What kind of Bullet Dynamics interaction with characters are you after? Ragdoll? Soft-body skin deformations? Automatic conforming of the animation to environment elements?
  7. Just found this! https://vcai.mpi-inf.mpg.de/projects/DragGAN/
  8. While writing the previous one I had a better idea. I hacked the procedure a bit to make it compatible with R21's capabilities. go 2.c4d You won't have to manually keyframe every move, but you will have to draw the paths manually. Procedural angular movement is an old problem I've been asking about around here, even to Rocket Lasso. For unknown reasons, Cell Noise in a Shader Field used under the Force Field won't do the trick. Someone (I think Srek) had written a Python script for this, Py Random Walker.c4d, but it won't help you much in this case. (A rough sketch of the random-walker idea follows below.)
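     For anyone curious what such a random walker looks like as a script, here's a minimal Script Manager sketch of my own (not Srek's file; the step size, step count and axis-aligned directions are arbitrary choices) that traces an angular path into a linear spline:

     ```python
     import c4d, random

     def main():
         # Hypothetical parameters: step length and number of steps
         step, count = 50.0, 40
         # Axis-aligned directions give the angular, grid-like movement
         dirs = [c4d.Vector(1, 0, 0), c4d.Vector(-1, 0, 0),
                 c4d.Vector(0, 0, 1), c4d.Vector(0, 0, -1)]
         pts = [c4d.Vector(0)]
         for _ in range(count):
             pts.append(pts[-1] + random.choice(dirs) * step)
         # Bake the walk into a linear spline and add it to the scene
         spline = c4d.SplineObject(len(pts), c4d.SPLINETYPE_LINEAR)
         spline.SetAllPoints(pts)
         spline.Message(c4d.MSG_UPDATE)
         doc.InsertObject(spline)
         c4d.EventAdd()

     if __name__ == '__main__':
         main()
     ```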
  9. Alright, there is a better and easier way to do it, BUT it requires C4D > R23. The following project file is completely useless because it was made in R20, but you can make it work in more recent C4D versions by adding the following: go.c4d Enable Fields on the Matrix Weight tag. Insert the Box Field in that tag's field list. Expect it to work properly when you hit play. The concept: the Box Field changes the weight of a matrix element from 0 to 100, affecting its visibility (via the Plain Effector). This also works in R20, but there is a crucial difference: after the field has moved away from a matrix element, its weight automatically drops back to 0, making it invisible again. In more recent versions, the inclusion of Fields in the Matrix Weight tag enables the use of Field Layers. Using the (existing by default) Freeze layer, the matrix element won't lose its weight value after the Box Field has passed.
  10. I would try to recreate this using particles with 0 speed, no death rate, no forces, nothing else. Since each pawn spawns from a previous one, having a particle emitter travel along a predetermined path, giving birth to a particle at constant time intervals, makes more sense. To give the impression of a pawn giving birth to other pawns, just make a pawn a child of the emitter and have it follow the emitter. The hardest part will be controlling the speed of the emitter so that each birth coincides with it passing through the center of a birth point, since spawning is strictly determined by time and no other C4D mechanic (Fields, Effectors etc.) can influence it (see the arithmetic below). While writing this I had a better idea. Give me some time to test it.
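     To make the timing constraint concrete, the required speed is just distance over time (all numbers below are made up for illustration):

     ```python
     # The emitter must cover the spacing between two birth points
     # in exactly one birth interval.
     fps = 30
     birth_interval_frames = 15          # one particle every 15 frames
     spacing = 120.0                     # distance between birth points (units)

     interval_s = birth_interval_frames / fps   # 0.5 s between births
     speed = spacing / interval_s                # required emitter speed
     print(speed)                                # -> 240.0 units/s
     ```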
  11. If the Displacer deformer does not work for you, try the Sub-Polygon Displacement channel of a material. The only disadvantage is that you won't have a viewport preview, but that's also an advantage: you can use a non-rendered proxy of the object with the Displacer deformer if you need to take close-ups, and always keep a general awareness of the volume.
  12. I have been asking Maxon for a shortest-path algorithm for the last couple of years. Anyone interested can have a look at a recent post about a new plugin.
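     For context, what I'm asking for treats the mesh as a weighted graph of vertices and edges. A minimal Python sketch of Dijkstra's algorithm (the adjacency format is my own assumption, nothing C4D-specific):

     ```python
     import heapq

     def dijkstra(adj, src, dst):
         """Shortest path over a weighted graph given as
         adj[v] -> list of (neighbor, edge_length) pairs.
         Assumes dst is reachable from src."""
         dist = {src: 0.0}
         prev = {}
         pq = [(0.0, src)]
         while pq:
             d, v = heapq.heappop(pq)
             if v == dst:
                 break
             if d > dist.get(v, float('inf')):
                 continue  # stale queue entry
             for w, length in adj[v]:
                 nd = d + length
                 if nd < dist.get(w, float('inf')):
                     dist[w], prev[w] = nd, v
                     heapq.heappush(pq, (nd, w))
         # Walk back from dst to src to recover the path
         path, v = [dst], dst
         while v != src:
             v = prev[v]
             path.append(v)
         return path[::-1]

     # Tiny usage example: a square of four vertices, unit edges
     adj = {0: [(1, 1.0), (2, 1.0)], 1: [(0, 1.0), (3, 1.0)],
            2: [(0, 1.0), (3, 1.0)], 3: [(1, 1.0), (2, 1.0)]}
     print(dijkstra(adj, 0, 3))   # e.g. [0, 1, 3]
     ```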
  13. So how did you do it? Is it particles? Is it splines? At least share the file or export an OBJ for the OP to use.
  14. Your approach has two flaws:

     1. It cannot effectively scale polygons down to 0 when they are connected to each other. To do that you need to know exactly how far away each point is from the center of mass, so you can enter half of that as a value in the Plain Effector, and estimating this value is very hard. In your scene, if you hit render, you can still see the scaled-down polygons.
     2. Your second tube is neither round nor straight like the first one. This is a serious difference from the first object.

     A polygon or point can be manipulated relative to many things. Usually they scale relative to their normals. You set polygons to move along the Z-axis by a certain value, so the affected polygons (which are also connected to each other) move towards the center, because their normals point to the center due to their circular arrangement. In order to both move and stay connected to each other, they squeeze themselves before reaching their destination. So scaling was a by-product in your case. You should be scaling their points.

     Now moving to the hexagonal tube: each polygon's normal does not point towards the center. Flat surfaces made of many polygons point collectively in the same direction. That's why you see that weird break at the corners when trying to move them along their local Z-axis (the normal axis).

     As mentioned before, polygons can be scaled relative to many things. If your object does not have a common center for all of its mass to scale down towards, the single-value trick won't work. So you can try to make the polygons scale relative to something else, this time the Effector itself (its pivot point). To do that, just change Deformation to Point, Transform Mode to Absolute and Transform Space to Effector. Move the Effector itself to the middle of the object, over any point of it, and scrub the timeline. You will notice that the object vanishes by scaling down, but it does so ineffectively, for two reasons:

     a. The Effector is stationary. As the Field passes through, so must the Effector, because every point wants to scale (points do not actually scale) towards the Effector's position.
     b. The object gets deformed in many unwanted ways, because the polygons are connected and want to remain so.

     So an Effector is not a good way to "vanish" an object the way you'd like (by warping it as if it were sucked into a wormhole). The only way I've seen Effectors make objects disappear is with the PolyFX. PolyFX disconnects all polygons so other Effectors can manipulate them discretely; in your case, scale them to 0. The problem with that is obvious: your mesh will break up, and even with Fields delaying the disjoining of the polygons to the latest possible stage, the result will most of the time not be a pleasant visual effect.

     None of the above is a problem for a deformer. A deformer moves each point of a mesh according to a differential space formula, so essentially we still get a scaling of polygons as a by-product, but it's less obvious because we never explicitly deal with movement, scaling or rotation. (A rough code illustration of the point-scaling idea follows below.)
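     To illustrate the "scale the points, not the polygons" idea in code, here's a minimal Script Manager sketch; shrink_points, the pivot choice and the 0.5 factor are all my own, purely for illustration:

     ```python
     import c4d

     def shrink_points(op, pivot, factor):
         """Pull every point of a point object toward 'pivot' (world space).
         factor 1.0 leaves the mesh alone, 0.0 collapses it onto the pivot."""
         mg = op.GetMg()     # object's global matrix
         inv = ~mg           # its inverse, for going back to local space
         new_pts = []
         for p in op.GetAllPoints():
             world = mg * p                          # local -> world
             world = pivot + (world - pivot) * factor
             new_pts.append(inv * world)             # world -> local
         op.SetAllPoints(new_pts)
         op.Message(c4d.MSG_UPDATE)

     def main():
         op = doc.GetActiveObject()
         if op is not None and op.IsInstanceOf(c4d.Opoint):
             # Hypothetical pivot (the object's own axis) and factor
             shrink_points(op, op.GetMg().off, 0.5)
             c4d.EventAdd()

     if __name__ == '__main__':
         main()
     ```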
  15. OK, I've finally got my way around the numerous disappointments and came up with this: lattice.c4d To open it you need the best plugin in the world, Network.7z, from Noseman. There is one more step you can take to make it a bit more parametric: use the Segment Capsule from SceneNodes to control the number of intermediate control points of the vertical splines. Now that I look at the photo again I feel like an idiot, because the structure has nothing to do with what I just uploaded... Anyway, see also this: branched structure.c4d Only play with the Seeds inside the cloners; do not increase the number of clones to more than two... it gets crowded. And a spin-off which I think is even better: branched structure with network.c4d (this also needs the plugin).
  16. If you need to scale, then you scale. Using the Plain Effector you did not scale; you were moving all points along the Z-axis. linear field.c4d
  17. Well, my first idea didn't work, because the results were not good and because the C4D dynamics system does not allow for perpetual motion. Unfortunately, due to a bug, I can't provide the better solution I thought of. And of course I can't even test the rope idea, because I don't have 2023.
  18. I think what you could be googling is "organic lattice", "mycelium" and "porous structure".
  19. I can think of two solutions, one with branching and one without. The easy one is without branching: just make a null bounce inside a cube and trace it (see the sketch below). The other involves the same idea but uses TPs to branch out once in a while. I'll demonstrate the first one in a few hours. If I had 2023 I would also try out some tangled rope dynamics with a high collision radius.
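     Until then, here's roughly the bounce logic I have in mind, as a Python sketch (bounce_step and the cube half-size are my own names/assumptions; a Python tag would call something like this every frame to drive the null):

     ```python
     import c4d

     def bounce_step(pos, vel, limit):
         """Advance 'pos' by 'vel' and reflect the velocity whenever a
         component leaves the cube of half-size 'limit' centered at origin."""
         pos = pos + vel
         for axis in ('x', 'y', 'z'):
             value = getattr(pos, axis)
             if abs(value) > limit:
                 # Clamp back inside the cube and mirror that velocity axis
                 setattr(pos, axis, max(-limit, min(limit, value)))
                 setattr(vel, axis, -getattr(vel, axis))
         return pos, vel

     # Tiny standalone check with made-up numbers
     p, v = c4d.Vector(0), c4d.Vector(7, 3, 5)
     for _ in range(10):
         p, v = bounce_step(p, v, 20.0)
     print(p, v)
     ```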
  20. It does mention multiple views, but all the demonstration videos show amazing results with just a front view. I think all those videos are just publicity stunts and total lies. Providing more than one picture makes this "hidden AI" no different from a photogrammetry app. The fact that the results are such close replicas of the original image makes the app even more fishy. So far, all AIs produce results that deviate quite a bit from what they are fed as input. You can provide a 2D image to DreamStudio or similar and set it to have little impact on the output, but the paradigm there is image in, image out, and generally the best results happen when input and output are of the same type (with the exception of text-to-voice and vice versa); this is totally different. NVIDIA's best AI 3D generators even produce textures. Could it really be that Kaedim's AI just isn't trained to produce textures? Even Blender projects the generated image onto the geometry. It wouldn't even cost their "Quality Control team member" much to project the provided image... if their results are really such close matches to the depicted image, this should be no problem at all. I just wonder how long one has to wait after pressing the "generate" button. How long would a skilled modeler need to clean that house from AI garbage? If the time needed is not less than what a skilled user would need to actually model the concept art, then it's totally a scam and a waste of money.
  21. I call it controversial because I'm having a hard time accepting what I see. If this were real, Two Minute Papers would certainly have made a video about it. Rumors claim that it's not really an AI but a real person modeling on demand behind the API. Does anyone care to try it and give us a review? This is the most expensive AI app I've encountered so far, charging $6 for a trial instead of offering a free trial. https://kaedim3d.com/ The only competitor is https://make3d.app/?ref=theresanaiforthat
  22. What? You want to screen-record C4D using C4D? No, C4D does not provide any screen-capturing capabilities other than color picking. I think there is a way to record user actions, but for animation purposes, not screen recording. There's probably some good screen-recording app out there that records only while mouse buttons or keys are pressed, but I'm not aware of any.
  23. This is a serious question, regardless of how stupid it may sound due to my complete lack of professional experience in CG. At least in my country, from what I've been hearing among graphic designers in the 2D realm (Corel, Adobe Designer, Photoshop), they get very frustrated when a client comes to their office asking for a poster, flyer or business card, hands them a worn-out card as a template to recreate or approximate, and takes it for granted that the designer will be able, in no time and at no extra cost, to redesign their logo from the card itself or even from a low-res image off their FB page. (Is this also common where you are?) Does this also happen in 3D? Could a similar company approach a studio for a render of their product (already on the market, not still in design) without giving them the CAD or CAM product file? I've never seen a bow like this before... I thought all bows had to comply with standard regulations to be eligible for the Olympics. Unless of course it's not meant for competitive sports... 🦌🦌🦌
  24. I have R20, and projects people share from 2023.2 work just fine as long as they don't contain any later generators like Pyro, Mesher, SceneNodes or rope/cloth simulations. You should move on soon, though, because there's a pattern where every 5 years compatibility with older versions is cut off due to significant changes to many features.
  25. I get what you mean. In my mind, all these "alternative" roads are perceived as levels of quality in the photorealism context: Standard Renderer/Standard Material < Physical Renderer/Reflectance channel < Redshift. Material nodes are completely different; I consider them a low-level procedural shader-construction toolkit. I would advise beginners to learn these systems in that order, to understand bit by bit how CG lighting works and build their intuition, rather than jumping all at once onto Redshift or Physical.