In this article, we’re going to take a closer look at the automated adaptive grid refinement feature in Fidelity Pointwise. It reduces discretization error and respects user-defined constraints while accurately resolving the key flow features across a range of applications.
Simulation preprocessing aims to create a mesh that fits the required analysis. You should prioritize computational efficiency in generating a mesh that accurately captures both geometry and physics. Depending on your simulation objectives, you can refine specific areas of the mesh where smaller flow features are expected.
In the illustration below, refinement zones have been incorporated around the vehicle to account for anticipated physics, particularly in the wake region. This process requires substantial domain expertise and relies on user input. Excessive refinement in regions with minimal flow physics would unnecessarily increase computational cost and simulation time.
When creating a mesh, it should fulfill the following requirements:
In adaptive grid refinement, the boundary layer and near-wall physics must be preserved, and cell sizes should transition gradually to ensure solver convergence. Defining an adaptation sensor is essential during grid refinement; it marks the areas that need further refinement. For external aircraft analyses, the Mach number serves as a suitable adaptation variable, whereas in turbomachinery scenarios, velocity magnitude proves effective as an adaptation sensor.
First off, we lay down a baseline mesh to kickstart the adaptive grid refinement in Fidelity Pointwise, and a solution is run on this mesh. The sensor is then evaluated at each edge; wherever it exceeds your predetermined threshold, the adaptation flag is raised. Next up, a point cloud is created that pinpoints those locations and sets a fresh cell size for each area. This cloud gets merged into the baseline mesh within Fidelity Pointwise to create an updated mesh. Rinse and repeat until the solution no longer changes with the mesh.
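To make the loop concrete, here’s a toy one-dimensional sketch of the same idea, written with invented helpers rather than the actual Fidelity Pointwise API: edges are intervals, the sensor is the jump of a field across an edge, and flagged edges are split at their midpoint until nothing exceeds the threshold.

```python
# Toy 1-D stand-in for the adaptation loop (invented helpers, not the
# Fidelity Pointwise API): edges are intervals, the sensor is the change
# of a field across an edge, and flagged edges are split at the midpoint.
import math

def sensor(edge, field):
    x0, x1 = edge
    return abs(field(x1) - field(x0))  # jump of the field across the edge

def adapt(edges, field, threshold, max_cycles=10):
    for _ in range(max_cycles):
        flagged = [e for e in edges if sensor(e, field) > threshold]
        if not flagged:
            break  # solution no longer changes with the mesh
        refined = []
        for e in edges:
            if e in flagged:  # split flagged edges at their midpoint
                x0, x1 = e
                mid = 0.5 * (x0 + x1)
                refined += [(x0, mid), (mid, x1)]
            else:
                refined.append(e)
        edges = refined
    return edges

# A field with a sharp gradient near x = 0.5 attracts refinement there
field = lambda x: math.tanh(50 * (x - 0.5))
mesh = adapt([(i / 10, (i + 1) / 10) for i in range(10)], field, threshold=0.2)
```

After the loop, the mesh is dense only around x = 0.5; the real feature applies the same logic per edge in 3-D, with the point cloud carrying the new target cell sizes.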
In the figure above, we’ve got a cold jet swooping down onto a hot plate, with all the boundary conditions laid out for you.
The goal here is to do a side-by-side comparison of a fully structured mesh versus an adapted one. On the left, you’ve got the baseline hexahedron mesh, while on the right, you’ve got the initial unstructured mesh, ready to adapt to the velocity magnitude.
We’ve highlighted the scaling needed for the current edge length in the image on the right. The adaptation process zeroes in on the area sandwiched between the jet and the plate. We’ve created a point cloud from the solution, and about a quarter of the nodes have hit the threshold and are due for refinement.
By the fifth cycle, roughly 70% of the nodes were flagged for adaptation, with a whopping 94% getting the mark by the final cycle. Once we hit around 90% adaptation, we call it quits on the iterations.
Looking at the mesh stats, it’s clear that the tweaked mesh boasts fewer nodes and elements compared to the meticulously crafted hex mesh. Zooming into the impingement region reveals that the initial mesh missed the mark, but with each cycle, it got closer to the experimental data.
Alright, buckle up! In this scenario, we’re testing out an Aachen Turbine (as seen in the illustration above) with 41 blades, spinning away at a brisk 3500 RPM. Let’s check out the flow conditions at the inlet and outlet, neatly laid out in the table below:
P total (inlet) | 169,000 Pa |
T total (inlet) | 308 K |
α (inlet) | 49.3° |
P (outlet) | 135,000 Pa (average) |
And again, we’re keeping an eye on the velocity magnitude for our adaptation variable. Those shockwaves are front and center in our adapted mesh. Plus, look at the final adapted mesh—it’s spot-on in showcasing those secondary vortices and shocks.
Adaptive grid refinement isn’t limited to specific applications; it’s also valuable in automotive settings. In the illustration above, we’re examining the DrivAer model. Our focus is on the velocity magnitude as the adaptation variable.
A RANS simulation of the DrivAer model employs the SST two-equation turbulence model. Below, you’ll find the adapted mesh and the streamlines in the wake region. They illustrate a strong correspondence and effectively capture the vortices.
Let’s look at a DLR F6 model, a test case from the second AIAA drag prediction workshop. Flow conditions include a Mach number of 0.75 and a 1° angle of attack. We’re honing in on the Mach number as our adaptation variable here.
With adaptive grid refinement, those shockwaves atop the wing come into sharp focus. Check out the figure below, showcasing the initial and adapted surface pressures. The shocks get crisper with each adaptation cycle.
Now, when we look at the coefficients of lift and drag, it’s evident that the predictions converge with every cycle. This mesh adaptation technique can seamlessly slot into any workflow.
While there’s a bit of legwork upfront to set things up, once established, it runs independently. The adaptation cycle consistently maintains the mesh’s topology, cycling back to the baseline.
If you want to learn more, Cadence has a free on-demand webinar on how to automatically generate the best mesh each time with adaptive grid refinement in Fidelity Pointwise.
In this article, we’ll break down what SISO systems are all about and why they form one of the cornerstones of control theory. As engineers, we’re always looking for ways to make things faster and more reliable, which is why you’ll want to have this concept down cold.
A SISO system is a configuration built around a single variable: one input and one corresponding output. It’s a scenario where a single control signal influences a single response. SISO systems are often used to model and control linear time-invariant systems, whose behavior doesn’t change over time and can be described by linear equations.
There are three essential methods for SISO system control: transfer functions, stability analysis, and feedback control:
Mathematically, SISO systems are typically described using differential equations, transfer functions, or state-space representations. Differential equations provide a dynamic description of the system’s behavior in terms of rates of change, while transfer functions offer a frequency-domain perspective that simplifies analysis. State-space representations describe the system using a set of first-order differential equations, making it easier to work with modern control techniques.
We’ll talk more about the SISO system control functions later in the article.
Left: SISO and MIMO control systems. Right: SISO radio system with single transmitter and receiver.
In a Single-Input Single-Output system, the input can vary widely depending on the specific application, encompassing factors such as voltage, force, pressure, or temperature. Similarly, the output reflects measurable responses such as velocity, displacement, or position, which are influenced by the input. At the heart of SISO systems lies the fundamental relationship between input and output, elucidating the cause-and-effect dynamics within the system.
Some examples of SISO systems include:
With industrial automation, for example, in manufacturing plants, SISO controllers regulate variables such as temperature, pressure, or flow rate. Think about your home thermostat—it’s a classic example. The input (chosen temperature) influences the output (room temperature), and the controller adjusts the system to maintain the desired condition. In environmental control, HVAC (heating, ventilation, and air conditioning) systems work the same way.
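That thermostat loop is simple enough to sketch in code. Everything below (the thermal model, the numbers, the hysteresis band) is invented for illustration; the point is the single input (the setpoint) driving the single output (room temperature):

```python
# Thermostat as the simplest SISO loop: one input (setpoint), one output
# (room temperature), with on/off control and hysteresis. The thermal
# model and all numbers are invented for illustration.
def thermostat_step(room, setpoint, heater_on, hysteresis=0.5):
    if room < setpoint - hysteresis:
        return True       # too cold: switch the heater on
    if room > setpoint + hysteresis:
        return False      # too warm: switch the heater off
    return heater_on      # inside the band: keep the current state

def simulate(setpoint=21.0, room=15.0, steps=2000, dt=0.1):
    heater = False
    for _ in range(steps):
        heater = thermostat_step(room, setpoint, heater)
        # toy thermal model: heater adds heat, room leaks toward 10 C outside
        room += dt * ((2.0 if heater else 0.0) - 0.1 * (room - 10.0))
    return room

final = simulate()  # climbs from 15 C, then chatters around the 21 C setpoint
```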
If we look at consumer electronics such as audio systems, the volume knob acts as an input, and the resulting sound level is the output. Similarly, in washing machines, the cycle settings (the input) dictate the washing intensity and duration (the output).
In biomedical engineering and medical applications, SISO systems can model and control physiological variables like blood pressure, heart rate, or drug dosage. For instance, think about an insulin pump. It keeps track of the patient’s glucose level—that’s the input—and adjusts the insulin dosage accordingly.
The list of examples is long, and of course, there are many more than just the ones we’ve mentioned. But you probably have a good idea of the SISO principle by now, and we don’t want to bore you with endless examples. The important thing is that you understand that single input = single output.
Transfer functions are a key concept of SISO control theory. They provide a concise representation of how a system responds to different frequencies of input signals. A transfer function is a ratio of the Laplace transforms of the output and input signals, often denoted as:
H(s) = Y(s)/X(s), where Y(s) and X(s) are the Laplace transforms of the output and input signals.
Transfer functions allow engineers to analyze system behavior in the frequency domain, enabling insights into frequency response, stability, and performance characteristics. They are particularly valuable for designing controllers that can shape a system’s behavior to meet specific requirements. For instance, engineers can use transfer functions to design filters that attenuate certain frequency components or controllers that ensure the system responds optimally to different inputs.
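As a quick worked example of reading behavior off a transfer function, you can evaluate H(s) along the imaginary axis s = jω to get the frequency response. The first-order low-pass H(s) = 1/(τs + 1) below is an assumed example system, not one from this article:

```python
# Evaluating H(s) = Y(s)/X(s) on the imaginary axis gives the frequency
# response. The first-order low-pass here is an assumed example system.
import math

def H(s, tau=1.0):
    # transfer function of a first-order low-pass: H(s) = 1 / (tau*s + 1)
    return 1.0 / (tau * s + 1.0)

# gain at the corner frequency w = 1/tau is 1/sqrt(2), i.e. the -3 dB point
gain = abs(H(1j * 1.0))
print(round(gain, 4))  # 0.7071
```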
Stability is a critical concern in control systems, as an unstable system can lead to undesirable oscillations or even devastating failures. In SISO control, engineers have a couple of tricks up their sleeves to check for stability. One is the root locus method, and the other is the Nyquist criterion.
The Root Locus Method is a graphical technique that helps to visualize how a system’s poles (characteristic roots) change with different control gains. Engineers can evaluate the impact of the gain adjustments on stability and system performance by mapping the potential root locations in the complex plane.
Then there’s the Nyquist Criterion, another graphical technique. It relates the system’s frequency response to its stability, giving you a way to determine whether the system stays stable based on the number of encirclements of the critical point (-1) in the complex plane.
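The encirclement count behind the Nyquist criterion can even be checked numerically: sweep ω, track the phase of L(jω) relative to the critical point (-1), and sum up the winding. The open-loop system L(s) = K/(s + 1)^3 below is a textbook example chosen for illustration; since it has no right-half-plane poles, any net encirclement of (-1) signals closed-loop instability:

```python
# Numerically counting encirclements of -1 by the Nyquist plot of L(jw).
# L(s) = K/(s+1)^3 is a textbook example, not a system from the article.
import math, cmath

def encirclements_of_minus1(L, w_max=200.0, n=100001):
    # accumulate the unwrapped phase of L(jw) - (-1) over the sweep;
    # total / (2*pi) approximates the number of encirclements of -1
    total, prev = 0.0, None
    for k in range(n):
        w = -w_max + 2 * w_max * k / (n - 1)
        ang = cmath.phase(L(1j * w) + 1.0)
        if prev is not None:
            d = ang - prev
            if d > math.pi:       # unwrap jumps across the branch cut
                d -= 2 * math.pi
            elif d < -math.pi:
                d += 2 * math.pi
            total += d
        prev = ang
    return abs(round(total / (2 * math.pi)))

print(encirclements_of_minus1(lambda s: 10 / (s + 1) ** 3))  # 2 -> unstable
print(encirclements_of_minus1(lambda s: 4 / (s + 1) ** 3))   # 0 -> stable
```

For K = 10 the plot crosses the real axis at -1.25 and wraps the critical point twice; for K = 4 it crosses at -0.5 and never encircles it.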
Feedback control is an important concept in SISO systems. In a feedback control loop, the system’s output gets measured, compared with what we want it to be, and then used to tweak the system’s input. This helps keep the system in check, aiming to reduce any gaps between the desired and actual outputs.
PID (Proportional-Integral-Derivative) controllers are a common example of feedback control. They tweak the control input according to the current error (The gap between the desired and actual outputs), how that error adds up over time, and how fast it changes. Think of it as fine-tuning a recipe to get the perfect dish. Just like adjusting the seasoning, you can tweak the PID controller’s settings to get the system to behave the way you want – whether that means getting it to settle down quickly, avoiding overshooting the mark, or just keeping things steady.
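A minimal discrete PID loop makes the three terms concrete. The first-order plant model and the gains below are illustrative choices, not a specific system from this article:

```python
# Minimal discrete PID loop driving a first-order plant toward a setpoint.
# Plant (time constant 0.5 s) and gains are illustrative choices.
def simulate_pid(kp, ki, kd, setpoint=1.0, dt=0.01, steps=2000):
    y = 0.0                       # plant output
    integral = 0.0                # accumulated error (I term)
    prev_error = setpoint - y
    for _ in range(steps):
        error = setpoint - y                     # P: the current gap
        integral += error * dt                   # I: the gap summed over time
        derivative = (error - prev_error) / dt   # D: how fast the gap changes
        u = kp * error + ki * integral + kd * derivative
        y += dt * (u - y) / 0.5   # first-order plant: tau * dy/dt = u - y
        prev_error = error
    return y

final = simulate_pid(kp=2.0, ki=1.0, kd=0.1)  # settles at the setpoint
```

The integral term is what drives the steady-state error to zero; drop it and the loop settles slightly below the setpoint.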
In a SISO radio system, you’ve only got a single antenna each for the transmitter and the receiver, with no multiple-antenna setups. Antenna technology can come in different forms, with single or multiple inputs and outputs connected by the radio link. Think of it like a game of connecting the dots, but with antennas and signals.
To break it down simply, the input resembles the transmitter, and the output resembles the receiver. The transmitter sends its signals into the link, and they are picked up by the receiver at the end of the wireless path.
There are various configurations of single or multiple antenna links, and they are categorized like this:
In short, a SISO setup excels in simplicity. It doesn’t demand the complexity of various diversity techniques. But, while they’re straightforward, the performance of SISO channels is constrained. Interference and fading can make a more substantial impact compared to MIMO systems employing diversity mechanisms.
We’ve elaborated on the concept of MIMO in our article ‘Exploring MIMO Technology in Antenna Design‘.
If you are doing any design for SISO radio systems, feel free to check out our AWR solutions. This tool can help you to seamlessly model, simulate, and optimize RF setups for various antenna setups and performance improvements.
In contrast to SISO systems, MIMO systems (Multiple-Input Multiple-Output) enable you to send multiple data streams from a transmitter/sender to a receiver. Through solid MIMO antenna design, you can carry multiple data streams within just a single frequency channel, boosting data throughput without needing extra spectrum resources.
When talking about implementation methods for MIMO, we want to highlight TDM, spatial multiplexing, and FDM.
The TDM (Time-Division Multiplexing) method transmits data on different channels during specific time intervals. The clear advantage of this method lies in the simplicity of implementation and the fixed time slots. However, there’s a risk of reduced throughput when channels aren’t utilized simultaneously.
TDM can be combined with spatial multiplexing for improved performance. Spatial multiplexing utilizes beamforming within subarrays to focus antennas on transmitting or receiving data in specific directions. It’s also used to transmit multiple data streams at the same time.
With spatial multiplexing, you benefit from enhanced throughput and control with directional communication. However, it can be more challenging due to the complexity of beamforming and the need for multiple antennas.
Lastly, we have FDM (Frequency-Division Multiplexing), which carries multiple data streams over different frequencies in a communication channel. This makes the method very effective for broadcasting different information, but demultiplexing and filtering are required to extract each signal.
Here’s an illustration of how an antenna array uses beamforming to cater to multiple users: The encoder transmits signals through all the antennas at the same time, tweaking the phase and amplitude for each channel individually. This enables directional data transmission to multiple users, each working on different frequencies.
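The phase-tweaking idea can be sketched with the classic uniform-linear-array factor: a progressive per-element phase shift steers the main lobe toward a chosen angle. Element count, half-wavelength spacing, and the angles below are illustrative:

```python
# Uniform linear array: a progressive per-element phase shift steers the
# beam. 8 elements at half-wavelength spacing are illustrative choices.
import math, cmath

def array_factor(theta_deg, steer_deg, n=8, spacing=0.5):
    theta, steer = math.radians(theta_deg), math.radians(steer_deg)
    total = 0j
    for k in range(n):
        # element k's phase: path difference toward theta minus the applied
        # steering phase that compensates the path toward the target angle
        phase = 2 * math.pi * spacing * k * (math.sin(theta) - math.sin(steer))
        total += cmath.exp(1j * phase)
    return abs(total) / n  # normalized pattern magnitude

peak = array_factor(30, 30)  # main lobe lands on the steering angle: 1.0
off = array_factor(0, 30)    # away from the lobe, the elements cancel
```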
The first step in MIMO design is to configure the antennas. You’ve got a few choices: linear, circular, and planar arrays. These arrays come in two forms: uniform or non-uniform. With a uniform array, the antenna elements are uniformly spaced, while a non-uniform array lets you adjust the spacing to fine-tune how everything performs.
Let’s dive into the concepts of antenna beamforming, placement, and crosstalk.
Beamforming techniques come in several forms, including digital, analog, and hybrid approaches.
Analog beamforming sticks to the tried-and-true phased array setup, using phase-shifting transceivers to manipulate signals.
Digital beamforming simplifies things by managing signal manipulation digitally, which helps in organizing the layout and routing complexities on the PCB.
Lastly, the hybrid approach combines analog and digital techniques, offering a blend of analog broadcasting and digital pre-coding. This hybridization not only reduces the computational load but also simplifies the PCB layout.
When it comes to MIMO antenna design, one of the big things to think about is antenna placement. The best placement depends on various factors, like the PCB stack-up, what components you’re using and where they’re placed, your grounding strategy, and how you’re routing everything.
Usually, you’ll want to place the antennas at the edge of the board to maximize separation from digital components. This separation helps keep interference and crosstalk between the antennas and the digital parts to a minimum. Sometimes, especially in complex designs, you might even put some of the digital circuitry on a separate board to keep it even more isolated from the antennas.
Bad antenna placement can cause unwanted crosstalk issues. Crosstalk happens when antennas transmit signals that interfere with nearby digital channels. It’s like when someone’s playing their music too loud, and it starts bleeding into your room – not cool. This interference can mess with the quality of the signal, causing problems like more jitter and higher levels of noise, which ultimately leads to more errors in your data.
Now, usually, we tend to worry about digital crosstalk into analog channels, but the reverse situation can also be the case. For instance, noise from switching regulators can sneak into the digital channels and mess up the signal integrity.
So, when you’re designing your MIMO antennas, it’s important to consider precise placement and think carefully about all these factors. Getting it right means better performance and less unwanted interference and crosstalk.
The use of different polarizations (vertical, horizontal, slant, etc.) can work wonders in reducing interference between antennas. Dual-polarized antennas can effectively double the number of channels in a given space, which ramps up the system’s capacity.
CP (Circularly Polarized) MIMO antennas are increasingly used in wireless systems, from satellite and mobile communications to global navigation. Unlike their linearly polarized counterparts, CP antennas are less susceptible to polarization mismatch losses, which makes them well suited for long-distance and satellite communications where signal quality is everything.
The secret sauce of circular polarization lies in tweaking current distributions and phase shifts. For example, by playing with current directions and phase, you can switch between left-hand circular polarization (LHCP) and right-hand circular polarization (RHCP). This dual capability allows the antenna to efficiently manage polarization diversity, which is crucial for strong and steady wireless links.
When it comes to radiation patterns, antennas should be able to broadcast in all directions or focus their signal depending on what they’re used for. For mobile devices, omnidirectional patterns are usually preferred for uniform coverage. And don’t forget to optimize for diversity gain. This is achieved by having uncorrelated or minimally correlated signals at the receiver.
First off, there are various ways to cater to the need for wider bandwidths and minimize interference: for instance, truncating the corners of patch antennas or utilizing dielectric resonator antennas (DRAs) with optimized coupling slots.
When it comes to spacing between antenna elements, it’s crucial to maintain a distance of at least half the wavelength (λ/2) of the signal to reduce mutual coupling and correlation. However, achieving this in compact devices can be challenging, leading designers to employ decoupling and matching techniques.
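In numbers, the λ/2 rule is straightforward (free-space wavelength assumed; antennas on real substrates see a shorter effective wavelength):

```python
# Minimum element spacing per the lambda/2 rule, using the free-space
# wavelength at the carrier frequency (c = speed of light).
def min_spacing_m(freq_hz, c=299_792_458.0):
    # half of the free-space wavelength at the carrier frequency
    return c / freq_hz / 2.0

# e.g. antennas for a 2.4 GHz link want roughly 62 mm of separation
spacing_mm = min_spacing_m(2.4e9) * 1000
print(round(spacing_mm, 1))  # 62.5
```

It’s exactly this number that gets hard to honor in compact devices, which is where the decoupling and matching techniques come in.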
Ensuring sufficient isolation between antenna elements is crucial to diminish channel correlation and enhance MIMO performance. This can be achieved through various methods such as electromagnetic bandgap (EBG) structures, parasitic elements, or absorptive materials.
In terms of bandwidth considerations, MIMO antenna design must accommodate the required bandwidth for the communication standard. This can be achieved through the design of wideband or multiband antennas, utilizing techniques like fractal shapes, incorporating reactive loading, or employing tunable materials to adapt resonant frequency.
To summarize the article, MIMO systems enable you to send multiple data streams from a transmitter/sender to a receiver. We want to highlight the importance of how precise design choices not only can optimize performance and enhance connectivity – but also make or break your design.
We’ve written about MIMO’s counterpart, SISO, in our article ‘Understanding SISO Systems and Control Theory in Practice’.
If you are working with MIMO technology, simulation in the early stages of the process is crucial to predict performance and optimize the design before prototyping. Our AWR solutions can help you to get to where you need to be. Feel free to check them out.
The quality of a mesh can make or break a CFD simulation. Pointwise stands at the forefront of this domain.
Automation and customization are key aspects of Pointwise, enabling users to streamline their meshing processes. This is mainly achieved by implementing glyph scripting, particularly with Python.
Add ChatGPT to the mix, and we can revolutionize the way we approach glyph scripting. In this article, we explore how ChatGPT can assist you in streamlining and enhancing the glyph scripting process for Pointwise meshing.
The Glyph scripting interface allows users to automate a wide range of tasks in Pointwise, from simple, repetitive processes to intricate, customized mesh generation strategies.
At its core, glyph scripting serves three primary purposes:
Python, with its user-friendly syntax and extensive libraries, is an ideal language for glyph scripting. Its versatility and power make it accessible to both beginners and experienced programmers, allowing for the creation of both simple scripts and complex meshing algorithms.
ChatGPT, powered by the OpenAI GPT-3 API, is a natural language processing model that can be used alongside Python and Pointwise. With its language understanding and basic coding capabilities, ChatGPT can assist users in several ways:
Consider a simple scenario where you need a script to create a structured grid. While this task might be daunting for beginners, ChatGPT can help generate the initial script or refine an existing one. This AI assistant understands your requirements and translates them into efficient, executable Python code. Such assistance not only saves time but also minimizes potential errors, making the scripting process smoother.
Imagine you need to create a basic structured grid. Typically, this task might involve a fair bit of manual coding. However, with ChatGPT, you can quickly generate a script like the following:
# Python script for basic structured grid creation in Pointwise
import pw  # illustrative module name; the script references it as pw below
# Create a structured grid block
block = pw.Application.createBlock()
# Define the dimensions of the grid
block.setIDimension(50)
block.setJDimension(50)
# Set the grid distribution
block.setIDistribution("Uniform")
block.setJDistribution("Uniform")
# Assign grid points (Example coordinates)
block.setGridPoint(0, 0, [0, 0, 0])
block.setGridPoint(49, 0, [1, 0, 0])
block.setGridPoint(0, 49, [0, 1, 0])
block.setGridPoint(49, 49, [1, 1, 0])
# Generate the grid
block.generate()
This script is a basic example. By providing more detail in your input, ChatGPT can help create a script tailored to your specific requirements for grid size, distribution, and dimensions.
For a more advanced scenario, such as automating mesh refinement based on specific flow features, ChatGPT can assist in conceptualizing a more complex script. Here’s an example of what that might look like:
# Python script for automated mesh refinement in Pointwise
import pw  # illustrative module name; the script references it as pw below
import numpy as np
def refineMeshBasedOnFlowFeature(block, featureThreshold):
    # Function to refine mesh based on a flow feature threshold
    # Analyze the block for flow features
    flowData = analyzeBlockForFlowFeatures(block)
    # Refine the grid where the flow feature exceeds the threshold
    for i in range(block.getIDimension()):
        for j in range(block.getJDimension()):
            if flowData[i][j] > featureThreshold:
                block.refineCell(i, j)

def analyzeBlockForFlowFeatures(block):
    # Placeholder function to analyze flow features in the block
    # In practice, this would involve complex flow analysis logic
    return np.random.rand(block.getIDimension(), block.getJDimension())
# Example usage
block = pw.Application.createBlock()
# Set dimensions, distribution, and points as required
refineMeshBasedOnFlowFeature(block, 0.5)
This script represents a more sophisticated approach where the mesh is refined dynamically based on flow characteristics, a task that ChatGPT can help script and optimize.
Anyone who has played around with AI and ChatGPT knows that the output you get is only as good as the input you give. When using ChatGPT or similar AI models to assist in scripting or problem-solving, we recommend keeping these best practices in mind:
The world of Pointwise and Python scripting is a vast one. Don’t hesitate to explore the thriving online communities, forums, and resources dedicated to these topics. Sharing your experiences and seeking support from fellow enthusiasts can be immensely beneficial.
ChatGPT can be a game-changer for Python scripting in Pointwise. It not only enhances automation but also offers invaluable user assistance. As you integrate ChatGPT into your workflow, you’ll discover new ways to streamline meshing operations and troubleshoot scripting issues efficiently. Embrace this exciting synergy between AI and CFD for a brighter meshing future.
Nordcad guides you in the right direction when it comes to optimizing your design process. We provide valuable know-how to strengthen your CFD efforts every step of the way. You will gain more transparency and better utilization of both your time and your budget.
Choosing the right software for your future projects can be daunting.
We have experienced this countless times with potential customers browsing for the best software options. And we recognize the challenge.
Today, whenever we discover the need for a new device or tool, or to migrate/upgrade from an existing one, we take on the research on our own. We scour online sites for useful information, ask around to gain insights from experience we don’t have first-hand, and start to put the pieces together. But even then, there are lots of gaps to be filled and many questions left unanswered.
Firstly, we need to change the question we are looking for an answer to.
We hate to burst your bubble, but there is no silver bullet in this. No one-size-fits-all, holy grail of electronics development software.
Instead of searching for what “the best” software option is, ask what the right option is for you.
It all depends on your specific needs and the requirements of the specific project you are working on.
In this article, we will give you an honest comparison of Altium vs OrCAD X with pros and cons of both design platforms.
Before we get into it, let’s lay out why choosing the right software is so challenging and what you need to consider in the process.
Electronics development isn’t a monolith; it’s a vast realm with applications that range from crafting consumer gadgets to engineering intricate medical devices or industrial control systems. This diversity demands specialized software tools tailored to each unique application.
When browsing for the right software solution for you, consider this:
As you can see, each piece of the puzzle represents a unique challenge, and every engineer aims to put them together to bring their project to life. We’ve explored the critical aspects to consider prior to making any further decisions.
Now we’re about to delve into the nitty-gritty details of the functionality offered by Altium Designer and OrCAD X, and you might be surprised. The line between these two software options isn’t always as distinct as you’d think, as both have evolved to offer impressive capabilities.
Schematic Capture | Altium | OrCAD X |
Hierarchical schematic capture with integrated analog mixed-signal simulation | y | y |
Built-in SPICE simulator with model library and migration support for LTspice/PSpice | y | y |
Variant management tools for complex projects | y | y |
Encrypted simulation models (PSpice) | x | y |
Integrated pre-layout signal integrity analysis | x | y |
PCB Layout | Altium | OrCAD X |
High speed design and rigid-flex design features included in standard license | y | y |
Rules-driven design for modern PCB (HDI, Rigid-flex, and Multiboard) | y | y |
Advanced interactive routing features with built-in signal integrity tools | y | y |
Online 3D clearance checks in 2D and 3D canvas | y | y |
Photo-realistic 3D viewer with real-time clearance checking | y | y |
Integrated impedance and coupling analysis | x | y |
Quick and dynamic updates of copper under placement and routing | x | y |
Visual DC power distribution analysis in the PCB layout | Requires Power Analyzer | Requires Sigrity OptimizePI |
Library Management | Altium | OrCAD X |
Built-in BOM management | y | y |
Unified libraries with symbols, footprints, simulation models, and distributor links | y | y |
Management of lifecycles, in-design validation, templates, and where-used for libraries | Requires Altium Subscription | y |
Manufacturing | Altium | OrCAD X |
Built-in fabrication and assembly drawing generation tools | y | y |
Intuitive interface for manufacturing output creation | y | y |
Quickly generate ECOs, reports, and documentation | y | y |
Share and view Gerbers, ODB++ files, and assembly steps in a browser | y | x |
Product Design | Altium | OrCAD X |
Support for multi-board systems and assemblies | Requires Altium Subscription | y |
Multi-board 3D product viewer from any web browser | Requires Altium Subscription | y |
Multi-board mechanical integration for SOLIDWORKS® 2020, 2021, and 2022 | Requires Altium Subscription | y |
Cloud Services | Altium | OrCAD X |
Version control, templates, and component libraries stored in cloud | Requires Altium Subscription | y |
Built-in task management and commenting for design files and BOMs | Requires Altium Subscription | y |
Concurrent PCB team design internally and externally | Requires Altium Subscription | y |
Intuitive co-design internally and externally via work groups | x | y |
Platform Integrations | Altium | OrCAD X |
Advanced integration with SOLIDWORKS® for MCAD component placement, copper geometry, enclosure exchange, and multi-board/rigid-flex sync | Requires Altium Subscription | y |
PLM integration with Arena® PLM, PTC Windchill, Aras PLM, Oracle Agile PLM, and Teamcenter® | Requires Altium Subscription | Additional service |
PLM integration to Highstage PLM | x | y |
Multiple importer utilities to convert files from other ECAD platforms | Requires Altium Subscription | y |
Workflows | Altium | OrCAD X |
Managed workflows for part requests, design reviews, project creation/release, and lifecycle approvals | Requires Altium Subscription | y |
Change notifications and change conflict prevention for design documents | Requires Altium Subscription | y |
Centralized control over design environments, document templates, and output files for every team member | Requires Altium Subscription | y |
Maintenance and Support | Altium | OrCAD X |
Backwards compatible with files from all previous releases | y | y |
On-demand training courses, knowledge base, and community forums | Requires Altium Subscription | y |
Live chat and support tickets | Requires Altium Subscription, which includes support tickets, chat function, documentation center, and access to Altium community forums | Nordcad customer service includes local hotline support, chat function, access to COS (Cadence Online Support), software downloads, and free online resources |
Platforms | Altium | OrCAD X |
Windows 10 and newer | y | y |
MacOS 10.15 and newer | x | x |
Linux | x | y |
Licenses | Altium | OrCAD X |
Time-based subscription | y | y |
Perpetual license options | y | y |
Multiple subscription options | y | y |
Let me guess – you can’t spot a distinctive difference. We don’t blame you. Altium and OrCAD X are both extremely powerful and reliable tools.
Altium has unquestionably established itself in the mainstream market thanks to its ease of use and its capabilities for a streamlined workflow in an all-in-one platform. But with OrCAD X entering the room, how do these two giants actually compete against each other?
Since the functionalities and the (updated) user experience closely resemble each other, we'll have to zoom out and consider the wider advantages and disadvantages of both platforms.
Please note that the choice between these PCB design tools often highly depends on your specific project requirements, budget, and personal preferences.
Each tool has its strengths and weaknesses, and the best choice for you may vary depending on your needs, experience level, and the complexity of your PCB designs.
We always advise trying out free trials and/or requesting a demo to evaluate each tool thoroughly before making a decision.
As we’ve explored in this article, both of these software solutions bring unique strengths and weaknesses to the table, making it essential to carefully evaluate your project requirements, budget constraints, and personal preferences.
The choice between Altium vs OrCAD X is not merely a matter of features or a checklist of pros and cons.
While Altium may excel in certain aspects, OrCAD X could better suit the requirements of a different project. The key takeaway here is the importance of taking the time to thoroughly assess your needs and objectives before making a choice.
We strongly recommend that you take advantage of the free trials and demos offered by both Altium and OrCAD X. This hands-on experience will allow you to gain firsthand insight into how each tool aligns with your specific design goals.
To help guide your decision-making process, consider the following factors:
By taking these considerations into account, you can make an informed decision that maximizes your productivity and project success.
Remember that the right choice is highly individual and can greatly affect the outcome of your PCB designs. So, whether you lean toward Altium or OrCAD X, the key is to choose the tool that empowers you to bring your design visions to life effectively and efficiently.
It is not uncommon for CFD engineers to simplify complex shapes to make meshing easier. While this can save time, it often sacrifices the precision of their simulations, impacting performance.
Over the years, we’ve seen considerable efforts to develop algorithms that automate mesh generation, enhancing accuracy and reducing manual intervention. Take, for example, healthcare devices such as artificial tricuspid valves – the need for precision here is paramount, as human lives are at stake. Automated mesh generation can significantly cut down the time spent on repetitive tasks, a boon in such critical scenarios. Let’s find out how.
CAD preparation (geometry and CAD cleanup) is the most tedious and time-consuming part of the CFD process, generally amounting to 80% of the total time spent on a CFD simulation.
When preparing for a CFD simulation, engineers face several pain points:
Maintaining the intricate features of complex geometries while meshing can be difficult due to the need for geometry precision. Ensuring that the mesh accurately reflects the geometry's intricacies often involves manual adjustments and refinements. Simplifications or approximations may lead to inaccuracies in the CFD simulations.
Creating high-quality meshes for complex geometries requires careful consideration of cell size, shape, and connectivity. And manually creating these can be both time-consuming and incredibly tedious, especially for large geometries.
Engineers often spend a substantial amount of their time on mesh generation alone.
Generating meshes that accurately capture flow phenomena like separation, shear layers, and stagnation points requires well-structured and high-quality cells. Ensuring these qualities can be challenging due to the complex geometry’s impact on mesh quality.
Engineers may need to spend additional time refining and optimizing the mesh to achieve the desired quality. Poor mesh quality can lead to inaccurate results and convergence issues.
Determining where and how to refine the mesh to ensure accuracy in simulations, particularly in areas with complex flow patterns, can be challenging. This process involves identifying regions that require refinement based on solution errors.
The iterative nature of mesh adaptation and refinement, where the mesh is adjusted based on simulation results, can extend the simulation setup time.
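The adapt-and-refine cycle described above can be sketched in plain Python. Note that `adapt_mesh`, the toy `sensor`, and the point-cloud format here are hypothetical illustrations of the idea, not Pointwise's API:

```python
import math

def adapt_mesh(edges, solution, sensor, threshold, shrink=0.5):
    """One refinement pass: evaluate the adaptation sensor on every edge,
    raise the flag where it exceeds the threshold, and emit a point cloud
    assigning a smaller target cell size at each flagged edge's midpoint."""
    point_cloud = []
    for p1, p2 in edges:
        value = sensor(solution, p1, p2)           # e.g. a Mach-number jump
        if value > threshold:                      # adaptation flag raised
            mid = tuple((a + b) / 2 for a, b in zip(p1, p2))
            new_size = shrink * math.dist(p1, p2)  # refine: halve the spacing
            point_cloud.append((mid, new_size))
    return point_cloud

# Toy run: the "solution" is unused and the sensor is just edge length,
# so only the long edge is flagged for refinement.
edges = [((0.0, 0.0), (1.0, 0.0)), ((0.0, 0.0), (0.0, 0.1))]
cloud = adapt_mesh(edges, None, lambda sol, a, b: math.dist(a, b), 0.5)
print(cloud)  # [((0.5, 0.0), 0.5)]
```

In the full workflow, the returned point cloud would be merged back into the baseline mesh and the solver rerun, repeating until no edge trips the sensor.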
Simulating scenarios where the geometry changes over time, such as moving vehicles or rotating machinery, requires dynamic mesh generation solutions that adapt to these changes. Implementing such dynamic meshes can be technically challenging.
The benefits and limitations of manual vs. automated mesh generation speak for themselves:
Mesh generation is akin to crafting a precise blueprint for a complex structure, ensuring stability and accuracy. Yet, it’s far from a straightforward task. Let’s delve into five distinct mesh generation challenges that confront every CFD engineer. These hurdles demand careful consideration, but with the right strategies, they can be overcome – don’t worry, we’ll show you how.
Creating an impeccable mesh has traditionally been an art mastered over years. However, the prospect of software that respects design geometry and minimizes the learning curve offers the promise of enhanced efficiency and flexibility.
The choice between structured and unstructured grids hinges on the complexity of the geometry at hand. While structured grids excel in efficiency, they often fall short when dealing with intricate shapes. Unstructured meshing steps in to address this challenge.
Achieving excellence in fields like automotive design necessitates a meticulous approach. This involves refining the mesh in critical areas, including boundaries, walls, and off-body regions, to capture intricate flow physics.
Traditional mesh generation methods can falter in the face of high Reynolds numbers and complex eddies due to the sheer volume of cells required. High-order meshing, coupled with high-performance computing, presents a potential solution, albeit with cost considerations.
In applications like turbomachinery, where fluid domains evolve dynamically, maintaining solution quality is paramount. This requires the deployment of moving mesh algorithms to keep pace with fluid dynamics.
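One concrete knob in the near-wall refinement challenge above is the first cell height needed to hit a target y+. A minimal sketch, assuming one common flat-plate skin-friction correlation (Cf ≈ 0.026 / Re^(1/7); other estimates exist):

```python
import math

def first_cell_height(y_plus, u_inf, length, rho, mu):
    """Estimate the wall-normal first cell height for a target y+ using a
    common flat-plate skin-friction correlation (one of several in use)."""
    re = rho * u_inf * length / mu        # Reynolds number
    cf = 0.026 / re ** (1 / 7)            # flat-plate skin-friction estimate
    tau_w = 0.5 * cf * rho * u_inf ** 2   # wall shear stress
    u_tau = math.sqrt(tau_w / rho)        # friction velocity
    return y_plus * mu / (rho * u_tau)

# Air at sea level past a 1 m body at 30 m/s, targeting y+ = 1:
h = first_cell_height(y_plus=1.0, u_inf=30.0, length=1.0,
                      rho=1.225, mu=1.81e-5)
print(f"{h * 1e6:.1f} µm")  # on the order of ten microns
```

Cell heights this small next to centimeter-scale off-body cells are exactly why a gradual size transition matters for solver convergence.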
Now, having a visual projection of how your geometry is going to be meshed and discretized, before running any sort of mesh generation tool, is very helpful. This allows you to verify all the inputs of your meshing setup without having to run anything. And in some regions, a high level of detail is going to have critical impact on your CFD simulation and its reliability.
Because at the end of the day, you want to make sure that what's on your screen corresponds as closely as possible to what's going to end up in your wind tunnel or on your test track.
One of the most efficient ways to cut down on time and to maintain the highest level of accuracy and reliability in your results is to have it all under control within the same environment with the possibility to make it fully automatic. This way, you can seamlessly integrate it into your existing workflow according to your needs.
With Pointwise, no compromises are made. Our track record so far is reducing pre-processing time from more than three weeks to 1-2 days.
As you can see, Pointwise offers a suite of built-in tools designed to simplify the grid discretization workflow for complex geometries.
Let’s explore some of its key offerings:
1. Flashpoint: Automated Surface Meshing Tool: With minimal input required, Flashpoint automatically generates surface meshes. This tool is especially handy when you need to capture complex geometry curvature, like the leading edge of an aircraft wing.
2. T-Rex: Anisotropic Near-body Meshing Tool: T-Rex specializes in near-body or boundary layer meshing. It excels at handling symmetry boundaries, sharp edges, and thin surfaces. It generates layers of prisms and hexahedra to resolve near-wall flows and automatically adjusts extruding layers to avoid collisions.
3. Voxels: Automated Off-Body Surface Meshing: Voxels offer high-quality, uniform cells for off-body meshing. This tool removes voxels intersecting with geometry and excels in both internal and external flow geometry.
4. Mesh Adaption: Automated Tool for Refinement: Mesh Adaption refines meshes where needed based on flow solution error estimates. It’s great for achieving uniform and high-resolution meshing, especially in off-body regions.
5. High-Order Curved Meshing: This advanced technology leverages High-Performance Computing (HPC) to generate meshes with fewer elements, high accuracy, and reduced memory usage. It’s a game-changer for challenging scenarios like high Reynolds numbers.
6. Overset Meshing: Suitable for moving body applications, overset meshing recomputes mesh connectivity for changing fluid backgrounds. This method shines in turbomachinery, capturing moving body physics efficiently.
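As a rough illustration of the geometric layer growth that near-body tools like T-Rex perform, here is a minimal Python sketch; the function name and stopping rule are simplified assumptions, and the real algorithm's collision and quality checks are far more involved:

```python
def prism_layer_heights(first_height, growth_ratio, isotropic_size):
    """Wall-normal layer heights growing geometrically from the wall,
    stopping once a layer reaches the surrounding isotropic cell size.
    Illustrative only: real near-body meshing also runs collision and
    quality checks on every extruded layer."""
    heights, h = [], first_height
    while h < isotropic_size:
        heights.append(h)
        h *= growth_ratio
    return heights

layers = prism_layer_heights(first_height=1e-3, growth_ratio=1.2,
                             isotropic_size=0.01)
print(len(layers))  # 13 layers before reaching the isotropic size
```

The gradual 1.2x growth is what gives the smooth size transition from the boundary layer into the off-body region.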
Accurate meshing is crucial, as it impacts numerous applications with real-world implications. Cadence Pointwise’s diverse meshing technologies make it suitable for a wide range of domains, from turbomachinery to medical applications. It offers a flexible solution for automating complex grid discretization workflows without compromising accuracy.
With automated meshing, you:
Nordcad guides you in the right direction when it comes to optimizing your design process.
We provide valuable know-how to enhance your CFD efforts at every step of the way.
You will gain more transparency and better utilization of both your time and budget.
In PCB design, engineers constantly face the challenge of ensuring optimal performance and reliability of their designs.
Two key methods employed in this process are testing and simulation. These techniques play vital roles in validating designs, uncovering potential issues, and optimizing the overall functionality of PCBs.
Testing involves physically evaluating a PCB design to verify its functionality and performance. This process typically involves the following steps:
Test setups may include environmental, electrical, and functional testing, among others.
The main advantage of testing is its ability to provide accurate real-time results by directly measuring the physical characteristics of the design.
Additionally, you are also able to uncover issues that may not be apparent through simulation alone.
By physically interacting with the prototype, it allows the engineers to validate the performance of the design in real-world scenarios, accounting for factors such as noise, signal integrity, and thermal effects.
However, testing also comes with its two main limitations.
Testing can be very time-consuming and costly, involving the manufacturing of prototypes, setting up test equipment, and conducting comprehensive testing procedures.
Additionally, some issues may only become apparent during the later stages of the testing process, which can potentially lead to delays in the development cycle.
We all know that time is money, and multiple delays will hurt your company's wallet.
Not only in terms of development but also in terms of materials and other internal resources.
Some advanced testing methods are a good example of this.
Take methods such as reliability testing or, in rare cases, destructive testing.
Both may require sacrificing prototypes, which makes them impractical for large-scale manufacturing or costly when resources are limited.
Physical testing has its natural limitations, as it cannot always cover all possible scenarios and edge cases.
This leaves room for potential issues to arise in specific situations that were not tested.
Moreover, your prototypes may not fully represent the final product in terms of materials, manufacturing processes, or component variations.
This potentially leads to discrepancies between the tested prototype and the actual final product.
Maybe testing isn't feasible or practical for your specific designs, as they may involve high frequencies, complex interactions, or unsafe environments.
In such cases, the effectiveness would be limited.
In traditional PCB workflows, simulation methods weren't as advanced, precise, and frequent as they are today.
When the specifications of the design are confirmed, engineers would move on to:
After the PCB assembly, you would begin to test the design to verify that it works as intended.
But if it doesn’t work and you’ve made an error, then the trouble begins.
You’ll have to troubleshoot what caused the error, and that can be very time-consuming, because you may not have any direct indications of the root cause.
Unfortunately, these are some calculated risks that many engineers deal with on a regular basis.
This results in wasted time, money, and delayed manufacturing.
This is exactly why simulation has become an invaluable tool.
A simulation is a virtual approach for validating your PCB design.
Basically, it involves using specialized software to model the behavior of the PCB and its components based on mathematical algorithms and physical models.
By inputting various parameters and conditions into the software, engineers can simulate the electrical, thermal, and mechanical characteristics of the design.
And additionally, predict its performance without the need for physical prototypes.
There is no doubt that simulation comes with certain advantages.
Simulation allows engineers to quickly iterate and evaluate different design options and enables faster design cycles, all without the need for physical prototypes.
The possibility of exploring different design options and evaluating their impact on performance allows for optimization and fine-tuning of the PCB before committing to the actual manufacturing of the design.
Simulating the PCB under various conditions, such as electrical, thermal, and mechanical stresses, allows you to predict and optimize performance characteristics.
This leads to valuable insights into the behavior of the PCB design, as well as important feedback regarding component status, datasheets, and updates.
Through simulation, engineers can identify and address potential issues early in the design process.
Early simulations help identify potential risks and failure modes, enabling proactive measures that improve reliability and prevent design flaws before manufacturing.
This can reduce the costs that would otherwise be incurred in prototype revisions and physical testing, as well as the development costs associated with manufacturing, such as wasted materials, time, employee salaries, and lost revenue.
Avoiding critical design errors early minimizes the risk of expensive mistakes and poor quality of the finished product overall.
First, let us address the learning curve that comes with implementing simulation software into your toolkit.
It’s not exactly a limitation, but we want to prepare you if your company chooses to implement simulation in the future.
Like with all other beginnings, learning new software takes time.
Time and education you need to invest in to gain invaluable know-how that you can leverage for years to come.
Nevertheless, simulation can have its limitations if not used in a thoughtful manner, and if you don’t know what you are working with.
It heavily relies on accurate models and assumptions, which can introduce errors if not carefully validated.
The accuracy of simulation results is directly dependent on the quality of the input data and the model’s ability to accurately represent real-world conditions.
Additionally, simulation tools may not account for all physical phenomena, leading to certain limitations in accurately predicting the behavior of complex designs.
A modern workflow for PCB design is, not surprisingly, based on traditional methods, but with a more strategic and methodical approach to simulation.
Below you’ll see a rough illustration of a modern workflow.
It shows how the simulation works as a gatekeeper preventing errors from progressing to key activities in the workflow.
Every time a key activity like requirement specification, schematic, or layout has been completed, you'll want to simulate the design to check for potential errors or issues.
This gives you early indications of things you otherwise would have discovered later (too late) in the validation process.
With simulation, engineers don’t have to live with the constant fear of potential rework.
And managers won’t have to include “rework” as a calculated expense in their budgets.
📺 Webinar: Save time and start using simulation on your circuits
An overview of the benefits of using simulation and how to utilize tools to avoid mistakes.
📺 Webinar: Benefits of nonlinear simulations for your RF design
How to predict performance and optimize the power consumption of your designs.
First, let’s have a quick recap.
In the illustration below, you'll see a thought-out example of what an investment in simulation would look like.
As mentioned earlier, a PCB workflow without simulation will have higher costs, because of the potential reworks and waste.
Don’t let the initial steep curve of simulation scare you.
The long-term wins of implementing simulation outweigh the cost of time spent on education and acquiring the tools.
And even better, you only have to make the investment once.
In conclusion, both testing and simulation are essential tools for achieving PCB design success.
Understand that the implementation of simulation doesn’t mean the exclusion of testing.
In fact, testing is still mandatory.
But simulation will set you up for success in the long run, if used appropriately.
Here are our top three recommendations for using both approaches together.
Begin the design process by leveraging simulation tools to explore different design options and evaluate their performance characteristics.
This allows for faster iterations and cost-effective optimization before committing to physical prototypes.
The simulation will help identify potential issues early on and guide the rest of your design decisions and improve overall performance.
Once a design has been simulated and optimized, it is crucial to validate the simulation results by conducting targeted testing on physical prototypes.
This helps to ensure that the simulation accurately represents the real-world behavior of the PCB.
Focus on testing specific aspects or critical areas identified during simulation.
Do this to verify the simulation’s accuracy and identify any discrepancies or unforeseen issues.
Testing plays a vital role in validating the design’s performance under real-world conditions.
After initial simulations and optimizations, prototype PCBs should undergo comprehensive testing to evaluate their functionality, reliability, and adherence to industry standards.
Factors such as environment, component variations, or complex interactions may not have been uncovered in the simulation, because they haven’t been accounted for in the initial input data.
The results from testing can then be used to refine the simulation models and improve future designs.
Nordcad points you in the right direction when it comes to optimizing your design process.
We provide valuable know-how to fuel your PCB efforts every step of the way.
You’ll get more transparency and make better use of both your time and budget.
Click below to book a demo 👇
SPICE simulations are a well-known tool for accurate circuit design. However, for those who still rely on intuition and experience, it's high time to embrace PCB simulation software and explore its potential to promote less faulty and more sustainable designs.
While the sustainability agenda has primarily focused on large corporations, small and medium-sized enterprises also have a significant role to play in reducing their carbon footprint. Simulation software offers great value for promoting sustainability in the manufacturing industry by addressing areas such as efficiency, impact, waste, design reliability, and supply chain resilience.
SPICE simulation in particular has the potential to play a crucial role in promoting sustainable design practices for electrical products. With increasing demand for energy-efficient and eco-friendly products, it’s essential for engineers to adopt new technologies and strategies to meet these expectations. How, you might ask? Let’s find out.
SPICE simulation simply revolutionizes electronic circuit design and analysis. Its ability to accurately model intricate circuits empowers engineers to analyze performance parameters such as voltage, current, power dissipation, and frequency response.
By simulating circuits before physical prototyping, fabrication or manufacturing, SPICE simulation reduces both time and costs associated with design iterations. This iterative process aids in:
Some of the types of simulation that can be performed include:
Transient analysis: analyzes AC circuits, as well as circuits with nonlinear components and arbitrary waveforms
DC sweep: calculates the DC current in a circuit as a function of DC input voltage
AC analysis: calculates the circuit response in the frequency domain, such as for a filter or impedance-matching network
Parametric sweep: involves varying a specific parameter in the circuit across a range of values as part of another simulation
By simulating the behavior of the circuit under different conditions, SPICE simulations can help engineers and designers to optimize circuit designs, identify potential issues and improve overall circuit performance. And the best part? It’s a cost-effective method to test and validate new designs without the need for physical prototypes.
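To make the transient case concrete, here is what a .TRAN-style run computes, sketched numerically in Python instead of SPICE syntax; the component values are illustrative:

```python
# Step response of an RC low-pass filter (R = 1 kΩ, C = 1 µF, tau = 1 ms),
# integrated with forward Euler, mimicking what a .TRAN run computes.
R, C, VIN = 1e3, 1e-6, 5.0
tau, dt = R * C, 1e-6
v, t = 0.0, 0.0
while t < 5 * tau:                 # simulate five time constants
    v += dt * (VIN - v) / (R * C)  # dv/dt = (Vin - v) / (RC)
    t += dt
print(round(v, 3))                 # close to VIN; analytic value is 4.966 V
```

A real SPICE engine does the same job with adaptive time steps and full nonlinear device models, but the principle is identical: march the circuit equations forward in time.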
Cadence® PSpice® is the superhero variant of the SPICE simulator that offers an array of enhanced features and capabilities for PCB design. PSpice extends the functionality of SPICE and provides designers with a comprehensive toolset to simulate, analyze, and optimize electronic circuits.
Now, you may know that PCBs often play host to a mix of analog and digital circuitry. One notable feature of PSpice is its ability to perform mixed-signal simulations, allowing you to simulate the interaction between these different domains accurately.
By combining analog and digital simulation capabilities, PSpice enables designers to validate the integrity of mixed-signal designs, ensuring proper functionality and minimizing potential signal integrity issues.
Let’s say, you’ve got this complex design, maybe an IC or a SoC, and you don’t want to get bogged down in the nitty-gritty transistor-level details.
PSpice supports behavioral modeling, allowing designers to create and simulate complex circuit components using high-level behavioral models instead of detailed transistor-level representations.
Behavioral models can significantly reduce simulation time and enable rapid prototyping and optimization of complex designs.
PSpice truly brings the cool optimization tricks to the table. It incorporates advanced optimization techniques that enable you to refine your designs for specific objectives. You can define optimization goals, such as minimizing power consumption, maximizing signal-to-noise ratio, or meeting specific performance criteria, and PSpice will automatically adjust design parameters to achieve those goals.
This optimization process can help identify the most efficient component values, optimize circuit layouts, and fine-tune designs to meet desired specifications.
Say goodbye to manual trial-and-error!
Design for real-world scenarios and applications can be a real challenge, right? With PSpice, you are already one step ahead.
Beyond many standard types of analyses, such as transient and frequency-domain analysis, PSpice also facilitates more advanced analyses, such as sensitivity analysis and Monte Carlo analysis.
These analysis types ultimately help ensure robustness in the face of manufacturing variations and external influences, leading to more reliable and resilient designs.
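As a flavor of what a Monte Carlo analysis does, here is a minimal Python sketch that propagates component tolerances through an RC low-pass cutoff formula; the tolerances and sample count are made-up illustrations, and PSpice's own implementation works on the full circuit model:

```python
import math
import random

# Monte Carlo tolerance analysis of an RC low-pass cutoff frequency, in the
# spirit of PSpice's Monte Carlo analysis. Illustrative values: R = 1 kΩ
# ±5 %, C = 100 nF ±10 %, nominal f_c = 1 / (2*pi*R*C) ≈ 1.59 kHz.
random.seed(42)  # reproducible runs
cutoffs = []
for _ in range(10_000):
    r = random.uniform(1e3 * 0.95, 1e3 * 1.05)        # ±5 % resistor
    c = random.uniform(100e-9 * 0.90, 100e-9 * 1.10)  # ±10 % capacitor
    cutoffs.append(1.0 / (2 * math.pi * r * c))
print(f"{min(cutoffs):.0f} Hz .. {max(cutoffs):.0f} Hz")  # tolerance spread
```

The resulting spread tells you whether every unit coming off the line, not just the nominal one, will meet the spec.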
By leveraging the advanced capabilities of PSpice, you can gain a deeper understanding of your circuit designs, evaluate performance under various conditions, and optimize designs for efficiency, reliability, and functionality.
In the pursuit of minimizing e-waste, PCB designers hold a pivotal role as the vanguards of environmentally sustainable practices. The utilization of SPICE simulation offers an advanced arsenal of tools to achieve this crucial objective. So, as promised, here are three ways to unlock the power of SPICE simulation:
The framework is quite simple: early design validation and fault analysis = less costly rework and hardware failures.
Picture this: you’re in the early stages of designing a PCB. You are reusing a previous design, making adjustments and alterations. And you have enough experience under your belt to feel confident about the design – or you might just be in a time crunch. Let me challenge you on this.
Instead of jumping straight into physical prototypes or moving on to fabrication and manufacturing – risking costly rework – let SPICE simulation come to the rescue! You can simulate the behavior of your circuit under different conditions. It’s like having a crystal ball that helps you spot potential issues before they become real-world headaches. By catching those problems early on, you can save resources, minimize waste, and ensure your design hits the mark from the get-go.
Interested in learning more about this?
I hosted a webinar about the undeniable advantages of implementing simulation to your design process. In just 15 minutes, you will get real-world cases and data on the impact of simulation.
Because let’s face it – we all know that even the most meticulously designed circuits can sometimes encounter unexpected hiccups. The ability of SPICE simulation to analyze circuit behavior under fault conditions proves invaluable in reducing e-waste. By simulating fault scenarios such as short circuits, open circuits, or component failures, designers can identify weaknesses in the design and implement appropriate remedial measures. This proactive approach significantly mitigates the risk of field failures, prevents premature discarding of devices, and contributes to the overall reduction of e-waste. And saves money and working hours.
Analyzing your design allows for the identification of power-hungry components or circuit sections, enabling you to implement energy-efficient design strategies such as power gating, clock gating, or voltage scaling. By reducing unnecessary power consumption, designers can extend battery life, reduce energy waste, and promote sustainable electronics.
PSpice also enables simulation and analysis of dynamic power management techniques such as dynamic voltage scaling (DVS) or dynamic frequency scaling (DFS). By dynamically adjusting voltage levels or operating frequencies based on workload demands, designers can achieve significant energy savings without sacrificing performance. PSpice provides a platform for evaluating the effectiveness of these techniques, allowing you to minimize energy consumption.
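The energy argument behind DVS/DFS follows from the first-order dynamic power model P = C * V^2 * f. The sketch below uses made-up numbers purely for illustration, not figures from any particular device:

```python
# First-order dynamic power model, P = C * V^2 * f, showing why dynamic
# voltage/frequency scaling (DVS/DFS) saves energy. All numbers are
# made up for illustration, not taken from any particular device.
def dynamic_power(c_eff, vdd, freq):
    return c_eff * vdd ** 2 * freq  # switched capacitance * V^2 * clock

full = dynamic_power(1e-9, 1.2, 2.0e9)    # 1.2 V at 2 GHz
scaled = dynamic_power(1e-9, 0.9, 1.0e9)  # 0.9 V at 1 GHz
print(f"{1 - scaled / full:.0%} lower dynamic power")  # 72% lower
```

Because voltage enters quadratically, even a modest voltage reduction compounds with the frequency cut, which is exactly the effect a simulator lets you quantify before committing to silicon.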
Efficient thermal management is essential for maintaining the reliability and longevity of electronic devices. Excessive heat not only affects performance but also leads to energy wastage. PSpice facilitates thermal analysis by simulating and predicting temperature profiles within the circuit. By identifying hotspots and areas of inefficiency, you can optimize heat dissipation strategies, improve cooling mechanisms, and reduce energy loss due to thermal inefficiencies. This proactive approach to thermal management ensures energy-efficient operation while extending the lifespan of electronic devices.
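A back-of-the-envelope version of such a thermal check is the thermal-resistance chain T_j = T_a + P * (R_jc + R_ca); the sketch below uses illustrative values, not datasheet figures:

```python
# First-order steady-state thermal check: junction temperature from a
# thermal-resistance chain, T_j = T_a + P * (R_jc + R_ca). The values
# below are illustrative, not from a datasheet.
def junction_temp(t_ambient, power, r_jc, r_ca):
    return t_ambient + power * (r_jc + r_ca)

tj = junction_temp(t_ambient=25.0, power=2.0, r_jc=5.0, r_ca=20.0)
print(tj)  # 75.0: 2 W through 25 °C/W raises the junction by 50 °C
```

A simulator refines this with per-component power figures and transient profiles, but even this one-liner flags parts that will run dangerously close to their maximum junction temperature.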
In the quest for sustainable and environmentally friendly PCB designs, the utilization of reusable materials is of utmost importance. Let’s dive deeper into how SPICE simulation and PSpice can be our trusty sidekicks in optimizing material usage and fostering a circular economy.
Embrace the power of SPICE simulations for sustainable PCB design. Reduce e-waste, boost energy efficiency, and optimize materials with precision. Let’s revolutionize the electronics industry and create a greener future.