Everything posted by HappyPolygon
-
Taking cues from the technological advancements of the real world, Studio C built visuals that suggested functional consistency and iterative improvement of Matrix technologies. This approach helped shape the screen designs and UI for the Mnemosyne hovercraft seen in Resurrections. To create the UI for the Mnemosyne, the team looked to its predecessor, the main hovercraft featured in The Matrix and The Matrix Reloaded, for inspiration. Studio C used Cinema 4D and Redshift to bring the Mnemosyne screens to life, displaying realistic schematic renders, medical scans of 3D organs and vitals, 3D maps inside and outside the Matrix, and photoreal renders of the head jack, as well as data widgets. https://www.maxon.net/en/article/maxon-fuels-the-matrix-resurrections-reimagined-tech?utm_campaign=MaxonFuelsTheMatrix&utm_source=twitter&utm_medium=social&utm_content=1648581189&fbclid=IwAR04CFLSwqINvZAbC_7St1BWbhnDJXE94lMl1Et2t5A47s0HgZPvXs6I7w0
-
Why does my object move when I am trying to modify the axis?
HappyPolygon replied to a topic in Cinema 4D
Or use the Geometry Axis node (a script-based alternative is sketched below). geometry axis.c4d
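For reference, the same result can be had without Scene Nodes at all. Below is a minimal Python (Script Manager) sketch of the classic approach - move the axis to the geometry's bounding-box centre without visually moving the object. It assumes a single editable polygon object is selected and is only an illustration, not the Geometry Axis node's implementation:

```python
import c4d

def main():
    # Centre the axis of the selected polygon object on its bounding box
    # without moving the geometry in world space.
    obj = doc.GetActiveObject()
    if obj is None or not obj.IsInstanceOf(c4d.Opolygon):
        return

    doc.StartUndo()
    doc.AddUndo(c4d.UNDOTYPE_CHANGE, obj)

    center = obj.GetMp()                                        # bounding-box centre, local space
    obj.SetAllPoints([p - center for p in obj.GetAllPoints()])  # shift the points back...
    obj.Message(c4d.MSG_UPDATE)

    mg = obj.GetMg()
    mg.off = mg * center                                        # ...and push the axis forward
    obj.SetMg(mg)

    doc.EndUndo()
    c4d.EventAdd()

if __name__ == '__main__':
    main()
```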
-
That is still an air bubble, though. All of these games are now strictly internet-dependent. What happens if the game company shuts down the servers, goes out of business, or discontinues the game? Nothing lives forever. If the game dies, you won't be able to do anything with the item, and the whole meaning of "property" and "ownership" gets totally corrupted. At least with MTG you get to keep the cards. For MTG Online I don't know what the rules are, or whether you can trade your cards; I think the digital cards are priced 1:1 with the real ones. But in MTG Arena, you basically own nothing.
-
NFTs, or Non-Fungible Tokens, have hit the art industry like a comet. If you don't know what NFTs are, take a look here, and learn how they work here. There are a ton of legislative holes that should be covered immediately. What would it mean for mainstream art? What about fraud? When should someone be legally acknowledged as the creator? One of the biggest criticisms of NFTs is their CO2 footprint. The heirs of Pablo Picasso are facing a dilemma about the future of their grandfather's legacy.

A big part of the NFT community is made up of individuals who use AI to create images. Due to the infantile nature of the DeepDream algorithm, the produced images are very surreal, which makes them appealing to the eye. Nevertheless, making such images is not that difficult if you spend a few hours experimenting. DeepDream and the later AI neural networks rely on heavy GPU work - NVIDIA GPUs to be exact, and not just any GPU. The algorithms need a lot of VRAM; a GPU with less than 8 GB won't work at all. People who want to profit from AI art need to produce high-quality images, and that is only possible with really expensive 16 GB GPUs. And GPU prices are very high. If you wonder why, you'll probably guess wrong: it's not the chip shortage any more!

Websites that promote AI-generated art are spawning like zombies everywhere! You don't have to own a heavy-duty GPU; you can rent a rack of them for a few dollars per frame. Companies like these are thinning the market of the latest GPU models to profit from the NFT hype. Remember the CO2 footprint? Crypto uses a lot of energy, and NFTs are minted with cryptocurrency technology... But on top of that, to make the AI art in the first place you also have to dedicate hours of GPU time. People run servers for days to create videos and images worth looking at just by pressing start. It's like rendering, but with no effort, and useless.

The U.S. Copyright Office has reached a verdict some may not like: https://dot.la/creative-machines-ai-art-2656764050.html. But this only applies in the U.S.

If I had to make a list of priority problems involving NFTs, I would put theft in first place without a second thought! Bots are roaming the internet, copying images from sites everywhere and minting artworks under someone else's name. 'Huge mess of theft and fraud:' artists sound alarm as NFT crime proliferates. Artists report discovering their work is being stolen and sold as NFTs. NFT Platforms Have Found Their New Worst Enemy: Twitter T-Shirt Bots. How bots are stealing artwork from artists on Twitter.

WATERMARK EVERYTHING! EVEN YOUR SKETCHES ARE NOT SAFE! Even if you make pictures for fun and don't sell them, someone can make a profit from your work without you ever knowing. Make your art theft-proof with these steps (a small watermarking sketch follows the list):
1. Use a big, complex watermark. Any watermark, even the word VOID, can ruin the plans of someone with bad intentions. As long as the watermark covers most of the picture, you are safe.
2. Publish in small resolution. Keep your audience's interest by providing cropped parts of your artwork to satisfy curiosity about details.
3. If you own a website, use scripts that disable right-clicking; that will stop a small portion of thieves.
4. If you own a website, upload your images in a non-bitmap format using JavaScript plugins.
5. Post your work on sites that use NFT protection. So far I only know of DeviantArt, and that service is only for premium members.
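To make the watermark step concrete, here is a minimal, hedged sketch using Python and Pillow (my own illustration, not from any of the linked articles; the file names, font path, text, opacity and output size are all assumptions to adapt to your own pipeline):

```python
from PIL import Image, ImageDraw, ImageFont

def watermark(src_path: str, dst_path: str, text: str = "VOID", max_size: int = 1200) -> None:
    """Stamp a large diagonal text watermark over an image and downscale it."""
    img = Image.open(src_path).convert("RGBA")

    # Publish in a smaller resolution (step 2 of the list above).
    img.thumbnail((max_size, max_size))

    # Draw the watermark on a transparent overlay so its opacity can be controlled.
    overlay = Image.new("RGBA", img.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(overlay)
    font = ImageFont.truetype("arial.ttf", size=img.size[0] // 4)   # assumed font path
    draw.text((img.size[0] // 8, img.size[1] // 3), text,
              font=font, fill=(255, 255, 255, 96))                  # ~38% opacity

    # Rotate the overlay so the text runs diagonally across most of the picture.
    overlay = overlay.rotate(30, expand=False)

    Image.alpha_composite(img, overlay).convert("RGB").save(dst_path, "JPEG", quality=85)

watermark("artwork.png", "artwork_watermarked.jpg")   # assumed example file names
```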
-
DUNE (no need for an introduction here)
Wylie Co. visual effects breakdown of the Hunter Seeker Hologram sequence

DEATH ON THE NILE
Crime, Drama, Mystery
Storyline: Belgian sleuth Hercule Poirot's vacation aboard a glamorous river steamer turns into a terrifying search for a murderer when a picture-perfect couple's idyllic honeymoon is tragically cut short. Set against an epic landscape of sweeping Egyptian desert vistas and the majestic Giza pyramids, this tale of unbridled passion and incapacitating jealousy features a cosmopolitan group of impeccably dressed travelers, and enough wicked twists and turns to leave audiences guessing until the final, shocking denouement.
Trivia: Trailer:

TURNING RED
Animation, Adventure, Comedy, Family, Fantasy
Storyline: Mei Lee (voice of Rosalie Chiang) is a confident, dorky 13-year-old torn between staying her mother's dutiful daughter and the chaos of adolescence. Her protective, if not slightly overbearing, mother, Ming (voice of Sandra Oh), is never far from her daughter - an unfortunate reality for the teenager. And as if changes to her interests, relationships and body weren't enough, whenever she gets too excited (which is practically ALWAYS), she "poofs" into a giant red panda.
Trailer: Trivia: Behind the Scenes:

THE ADAM PROJECT
Action, Adventure, Comedy, Sci-Fi
Storyline: Adam Reed, age 12 and still grieving his father's sudden death the year before, walks into his garage one night to find a wounded pilot hiding there. This mysterious pilot turns out to be the older version of himself from the future, where time travel is in its infancy. He has risked everything to come back in time on a secret mission. Together they must embark on an adventure into the past to find their father, set things right, and save the world. As the three work together, both young and grown Adam come to terms with the loss of their father and have a chance to heal the wounds that have shaped them. Adding to the challenge of the mission, the two Adams discover that they really don't like each other much, and if they are to save the world, first they need to figure out how to get along.
Trailer: VFX:

SEVERANCE
Drama, Mystery, Sci-Fi, Thriller
Storyline: Mark leads a team of office workers whose memories have been surgically divided between their work and personal lives. When a mysterious colleague appears outside of work, it begins a journey to discover the truth about their jobs.
Trivia: Trailer: Intro: VFX shots: https://www.behance.net/gallery/138871275/Severance-Official-Intro-Title-Sequence-APPLE-TV
Made with: SideFX Houdini, Maxon Cinema 4D, Pixologic ZBrush, Nuke

THE BOOK OF BOBA FETT
Action, Adventure, Sci-Fi
Storyline: The legendary bounty hunter Boba Fett navigates the underworld of the galaxy with mercenary Fennec Shand when they return to the sands of Tatooine to stake their claim on the territory formerly ruled by the deceased crime lord Jabba the Hutt.
Trailer: VFX Breakdown:
-
NVIDIA Omniverse Ecosystem Expands 10x, Amid New Features and Services for Developers, Enterprises and Creators

NVIDIA Omniverse Enterprise is helping leading companies enhance their pipelines and creative workflows. New Omniverse Enterprise customers include Amazon, DB Netze, DNEG, Kroger, Lowe's and PepsiCo, which are all using the platform to build physically accurate digital twins or develop realistic immersive experiences for customers.

Enhancing Content Creation With New Connections and Libraries
The Omniverse ecosystem is expanding beyond design and content creation. In one year, Omniverse connections (ways to connect or integrate with the Omniverse platform) have grown 10x, with 82 connections through the extended Omniverse ecosystem.
New Third-Party Connections: the Adobe Substance 3D Material Extension and Painter Connector, the Epic Games Unreal Engine Connector and Maxon Cinema 4D will enable live-sync workflows between third-party apps and Omniverse.
New CAD Importers: These convert 26 common CAD formats to Universal Scene Description (USD) to better enable manufacturing and product design workflows within Omniverse.
New Asset Library Integrations: TurboSquid by Shutterstock, Sketchfab and Reallusion ActorCore assets are now directly available within Omniverse Apps' asset browsers, so users can simply search, drag and drop from close to 1 million Omniverse-ready 3D assets. New Omniverse-ready 3D assets, materials, textures, avatars and animations are also now available from A23D.
New Hydra Render Delegate Support: Users can integrate and toggle between their favorite Hydra-delegate-supported renderers and the Omniverse RTX Renderer directly within Omniverse Apps. Now available in beta for Chaos V-Ray, Maxon Redshift and OTOY Octane, with Blender Cycles and Autodesk Arnold coming soon.

There are also new connections to industrial automation and digital twin software developers. Bentley Systems, the infrastructure engineering software company, announced the availability of LumenRT for NVIDIA Omniverse, powered by Bentley iTwin. It brings engineering-grade, industrial-scale, real-time, physically accurate visualization to nearly 39,000 Bentley Systems customers worldwide. Ipolog, a developer of factory, logistics and planning software, released three new connections to the platform. This, coupled with the growing Isaac Sim robotics ecosystem, allows customers such as BMW Group to better develop holistic digital twins.

New Omniverse connections:
Adobe Substance 3D Material Extension: Import Substance 3D asset files into any Omniverse App.
Adobe Substance 3D Painter Connector: Apply textures, materials, and masks or UV mapping onto 3D assets with Adobe Substance 3D Painter, releasing March 28.
Unreal Engine 5: Send and sync model data and export Nanite geometry to Omniverse Nucleus.
e-on VUE: Create beautiful CG environments including skies, terrains, roads, and rocks.
e-on PlantCatalog: Export a plant, enable live-sync, and edit in real time.
e-on PlantFactory: Create ultra-realistic, high-polygon plants.
Maxon Cinema 4D: USD is now supported. Use the app in a connected workflow with OmniDrive.
Ipolog: Perform material provisioning and production logistics for manufacturing planners.
LumenRT for NVIDIA Omniverse, powered by Bentley iTwin: Allows engineering-grade, millimeter-accurate digital content to be visualized on multiple devices and form factors.

Omniverse Enterprise Features and Availability Broaden
New updates are coming soon to Omniverse Enterprise, including the latest releases of Omniverse Kit 103, Omniverse Create and View 2022.1, Omniverse Farm, and DeepSearch. Omniverse Enterprise on NVIDIA LaunchPad is now available across nine global regions. NVIDIA LaunchPad gives design practitioners and project reviewers instant, free, turnkey access to hands-on Omniverse Enterprise labs, helping them make quicker, more confident software and infrastructure decisions.

Latest Omniverse Technologies and Features
Major new releases and capabilities announced for Omniverse include:
New Developer Tools: Omniverse Code, an app that serves as an integrated development environment for developers and power users to easily build their own Omniverse extensions, apps or microservices.
DeepSearch: a new AI-based search service that lets users quickly search through massive, untagged 3D asset libraries using natural language or images. DeepSearch is available for Omniverse Enterprise customers in early access.
Omniverse Replicator: a framework for generating physically accurate 3D synthetic data to accelerate the training and accuracy of perception networks, now available within Omniverse Code so developers can build their own domain-specific synthetic data engines.
OmniGraph, ActionGraph and AnimGraph: major new releases controlling behavior and animation.
Omniverse Avatar: a platform that uses AI and simulation technology to enable developers to build custom, intelligent, realistic avatars.
Omniverse XR app: a VR-optimized configuration of Omniverse View that lets users experience their full-fidelity 3D scenes with full RTX ray tracing, at 1:1 scale, coming soon.
New versions of Omniverse Kit, Create, View and Machinima.
-
Nvidia has released Canvas 1.2, adding support for style variations. Users can choose from 10 variants of each ready-made visual style, making it possible to adjust the look of an image while preserving the overall theme. Style images and variations are also now saved in project files. Based on Nvidia's GauGAN 2 technology, Canvas is GPU-accelerated via the Tensor machine-learning cores in Nvidia GPUs, and requires an RTX card. However, if you're using an older GPU - or an AMD, Intel or Apple processor - you can still try the underlying technology online, via Nvidia's GauGAN 2 web app. It isn't as slickly presented, but it covers a wider range of landscape types, including buildings and other man-made structures, and also supports natural text input.

System requirements: Canvas is available in beta for Windows 10. You need an Nvidia GeForce RTX, Titan RTX, Quadro RTX or RTX GPU with version 471.68+ of the Nvidia GeForce, Studio or Quadro driver to use it. It's a free download.
-
What? It's a subscription plugin? Lame. I've never tried Signal, but is it really that hard to make an XPresso oscillation rig?
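For what it's worth, a basic oscillation rig doesn't even need XPresso. Here is a minimal Python tag sketch (my own illustration, not how Signal works; the amplitude and frequency values are arbitrary assumptions) that wiggles the host object's Y position with a sine wave:

```python
import c4d
import math

# Python tag sketch: oscillate the host object's Y position over time.
AMPLITUDE = 100.0   # centimetres (assumed)
FREQUENCY = 0.5     # oscillations per second (assumed)

def main():
    obj = op.GetObject()          # the object this tag is attached to
    t = doc.GetTime().Get()       # current document time in seconds
    obj[c4d.ID_BASEOBJECT_REL_POSITION, c4d.VECTOR_Y] = \
        AMPLITUDE * math.sin(2.0 * math.pi * FREQUENCY * t)
```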
-
No, no, no, no. I've heard there was a book about noises that developers at Maxon were using to develop the noises Cinema 4D has. My eyes have seen alien-level, Borg-kind math printed by Springer, read only by the authors who wrote it. I really have no idea how this field works. What books does someone have to read to become Srek or Fritz? This isn't a CG-specific problem; the acquisition of accumulated knowledge and advancement in the sciences is a long-standing problem I haven't yet understood how to solve. I once told my professor (a physicist): "Mr S., things I was studying in high school, like derivatives and integrals, you only got to know at university. Human advancement in the sciences keeps broadening, pushing the educational system toward more and more compact curricula. Your generation was lucky, because you reached your current level of knowledge after many years of practice and of reading papers from your colleagues. For me, in order to get a job like yours, I have to acquire at 30 the level of knowledge you now have in your 50s. Where is the limit of educational compactness, if one is to reach the new borders of knowledge and push them a little further before reaching a non-productive age?" He didn't answer; he likes it when I ask things like that. We were having dinner together one day when he came to my hometown after I graduated - that's where I asked him. I miss the old academic conversations, when I was surrounded by people with greater intellect than mine; I had people to look up to.
-
I'd love to see something like this, because for the past 20 years there hasn't been any new book about computer graphics that goes beyond anti-aliasing algorithms in 2D graphics and Phong shading in 3D graphics. Actually, the most advanced 3D book I've encountered was about light mapping (and I don't want to go that far without first learning about subsurface scattering or other geometry/structural methods). Everything else covers prehistoric algorithms. All modern structural methods concerning UV mapping and animation are hidden in scientific papers, without the well-digested presentation you would find in a book. Even old-school 90s effects like watercolor and pastel are absent from the signal processing / 2D graphics books I've encountered; they talk about edge detection and blur, and then you've reached the end of the book. I'm still looking for the title of that Maxon book about noises (I hope it's not only in German), and I still don't know the technology behind ZBrush's polygon encoding... To be fair, I've seen a lot of modern CG books, but they were about real-time graphics, which I'm not really fond of (I think it was a series of 3-5 books called CG Gems or something like that).
-
Very happy to hear about that, Srek. Do you use the old hack of exporting as Alembic to make the Scene Node object appear in the Object Manager, or am I missing a new way to do that? I noticed you output to a port, while I have that node communicating with the viewport... unless the Tool Hole is a node group connected to other nodes not visible in the screenshot. Oh wait, you made a Capsule object, right? Will this be the standard way to get a manifestation of the generated geometry? (So far my use of capsules is limited to making them children of objects and using them as modifiers, so when I hear about capsules my mind goes straight to that kind of use.)
-
I think I mentioned to Srek on CGTalk, as soon as Scene Nodes were first introduced and in the spirit of constructive criticism, the importance of a good, user-friendly nodal system that can support kitbashing. I'm not sure I used the term correctly, as many artists use it for the placement of small pre-modelled parts, whereas I had in mind something mid-scale, between that and the products of https://kitbash3d.com/ . Essentially you have finalized models or modular parts that, when procedurally placed using a constraint-and-condition-oriented system, can produce an unlimited variety of style-consistent, scalable models for scenes. My example of what the final result of a system like that might look like was a screenshot from the sandbox game Townscaper. I don't know how much they considered my suggestion, given its resemblance to MoGraph. I don't know how feasible it was either, as it was too specific a use case and too game-engine-like a feature. I don't even know if it is already possible in R25, as I still struggle with simple forms.

As for the texturing... AFAIK, Blender uses procedural texturing on node-generated models. As for the fully node-based Houdini, I'm not sure even that uses nodes to UV-unwrap and edit UV maps; maybe Igor knows more about this.

I think the main reason there aren't many tutorials for really innovative node structures is the same reason there aren't many tutorials on how to code for C4D or other DCCs: the "experimental" nature that these two share, and the fact that those involved in these areas are far fewer than those using the standard modelling options. I've never seen a tutorial on how to program a fully functional calculator with GUI elements. The same applies to nodes. The contributing factors are:
It takes a lot of time. The final video is going to be very lengthy, and the time the author needs to edit the video for release is a multiple of the recording time.
It needs a good tutor. It takes some serious skill to produce a decent tutorial with all the appropriate annotations and guidance from the narrator.
It needs preparation. The bigger the project, the more complicated it becomes; the more complicated it is, the harder it is to replicate live. For the same reason it didn't get solved in one day, it cannot be deconstructed, analyzed and presented in much less time.
It is a new "technology". Sometimes the area of expertise is fairly new, and not enough people are involved in it yet for "tutors" to emerge from among them.
It's complicated. A tutorial is something people think of as a video, or a small series of videos, not exceeding 30 minutes. If a series consists of three or more 45-60 minute parts, it is really a seminar and could easily fit in the academic realm as a full course.
Do you think you could do a full tutorial teaching people how you made one of your 2000-lines-of-code plugins? And if you could, what expertise level would it target? People who just started coding? People already familiar with the programming language? People familiar with the C4D SDK? (Rhetorical questions.) Let's take a recent "technology" from the IT field: DOCKER. How long did it take for people to start publishing books on how to use it properly? It took five years for it to start appearing in postgraduate courses. And to be fair, DOCKER is a quite robust platform.
On the other hand, Geometry Nodes have been around for two years and Scene Nodes for one, and both are still in development, with many additions and changes introduced quite often. So making in-depth tutorials for them, tutorials that may no longer apply after the next application update or the next major release, is not productive or profitable.

To wrap up, what you ask is not application-agnostic, as each company develops its tools as it sees fit, and, as you have already figured out yourself, the algorithms needed to do what you want don't exist. The main reason for that, I think, is the intertwined paradigms (classic and virtual mediums) of how things should get done. Without a computer, the artist first builds the base model, then adds details, then finally paints it. These steps never changed priority. So when the tools became digital, the developers kept that sequence and made everything depend on it (except rigging: in classic mediums the rig precedes modelling, and that's one place where the paradigms get intertwined). Nodes do not operate differently from the rest of the app; they are just an automated way to work with the classic tools. So far they seem to be the "macros" of older apps like Excel. They may offer some low-level control, but they are still limited by the internal procedures for how the app handles information.

If I understand correctly, you want to be able to delete a polygon assigned to a UV map, create a new polygon, and assign the now-lost UV coordinates to the new one? A very rough description of an idea that could let a UV'd human model change from quad topology to hex topology while maintaining the correct texturing - is that correct?
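For context on why this is awkward with the current toolset: in Cinema 4D, UV coordinates live in a UVW tag as one {a, b, c, d} record per polygon, so deleting a polygon also discards its UV record. The following is a minimal, hedged Python sketch of that data layout (my own illustration, not a solution to the retopology idea) which copies the UVW record of one polygon index onto another:

```python
import c4d

def copy_polygon_uvs(obj: c4d.PolygonObject, src_index: int, dst_index: int) -> None:
    """Copy the per-polygon UVW record from src_index onto dst_index.

    The UVW tag stores one record per polygon, so UVs are bound to polygon
    indices rather than to the points themselves.
    """
    uvw_tag = obj.GetTag(c4d.Tuvw)
    if uvw_tag is None:
        raise RuntimeError("Object has no UVW tag")

    record = uvw_tag.GetSlow(src_index)     # dict with keys 'a', 'b', 'c', 'd'
    uvw_tag.SetSlow(dst_index, record["a"], record["b"], record["c"], record["d"])
    obj.Message(c4d.MSG_UPDATE)

# Example use in the Script Manager, assuming an editable polygon object is selected:
# copy_polygon_uvs(doc.GetActiveObject(), src_index=0, dst_index=1)
# c4d.EventAdd()
```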
-
Is Chris a member of this forum?
-
WOW, that OpenVDB technology is wild. I wonder if they are planning on using it only for preview purposes, though... I've always rooted for a hybrid renderer that would automatically composite elements of real-time graphics and standard ray tracing... I've never heard of any renderer of that kind, though.
-
What kind of shimmering is that? The "Naaaaaaaaaaaaaaaaaaaaants ingonyaaaaaaaaaaaaaaaama-bagiiiiiiiiiiiithi baba!" kind? sunset.c4d
-
Possible to trigger an Effector's animation with another Effector?
HappyPolygon replied to pfistar's topic in Cinema 4D
I'm glad you've figured it out. Can't wait to see the final project some day. -
Train ? The engine is an AI ?
-
Possible to trigger an Effector's animation with another Effector?
HappyPolygon replied to pfistar's topic in Cinema 4D
Maybe the Time parameter is not for the effect you want. Have you tried the Delay Effector + a moving Field or a Field with oscillation for a repeated effect ? -
I don't see why this isn't possible... all I see in the plugin is a unified UI for strength per applied modifier, like the Cloner has for its Effectors. Easily fixed with XPresso (or a short Python tag, sketched below).
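As a rough illustration of that idea (my own sketch, not the plugin's implementation), a Python tag can drive several deformers from a single control; here the User Data slider (entry 1, range 0-100 %) on the host object and the 90-degree maximum are assumptions:

```python
import c4d

def main():
    # Python tag sketch: one User Data "Strength" slider on the host object
    # scales the strength of every Bend deformer among the host's children.
    host = op.GetObject()
    strength = host[c4d.ID_USERDATA, 1]     # assumed: first User Data entry, 0.0 - 1.0

    child = host.GetDown()
    while child is not None:
        if child.CheckType(c4d.Obend):
            # The Bend deformer's Strength is an angle; map the slider to 0 - 90 degrees.
            child[c4d.BENDOBJECT_STRENGTH] = strength * c4d.utils.Rad(90.0)
        child = child.GetNext()
```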
-
What does U-Render have to do with exporting ? C4D already supports glTF export.
-
The example in the manual is this: I want the Scaled Blue-Noise and I don't care about the color. Why doesn't my setup work at all?
-
Am I the only one who doesn't bother with the new layout because I use the old layout ?