Permutation Importance quantifies how much a model's performance depends on a feature. It measures how sensitive the model's predictions are to changes in that feature's values: the higher the sensitivity, the greater the feature's impact.
Permutation Importance measures the importance of a feature by calculating the decrease in the model score after randomly permuting (shuffling) that feature's values. A feature is "important" if shuffling its values decreases the model score, because the model relied on that feature for its predictions. A feature is "unimportant" if shuffling its values leaves the model score unchanged, because the model ignored that feature.
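As a minimal sketch of how this looks in practice, here is Permutation Importance computed with scikit-learn's `permutation_importance`; the dataset and model are illustrative stand-ins, not specific to this article:

```python
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Illustrative dataset and model; any fitted estimator works.
X, y = load_diabetes(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

# Shuffle each feature n_repeats times on held-out data and record
# the resulting drop in the model score (R^2 for regressors by default).
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# A large mean drop means the model relied on that feature;
# a drop near zero means the feature was effectively ignored.
ranked = sorted(
    zip(X.columns, result.importances_mean, result.importances_std),
    key=lambda t: t[1],
    reverse=True,
)
for name, mean, std in ranked:
    print(f"{name:>8}: {mean:.3f} +/- {std:.3f}")
```

Computing the importance on held-out data, as above, reflects how much the model depends on the feature for generalization rather than for fitting the training set.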
Permutation Importance is an alternative to SHAP Importance. There is a key difference between the two measures: Permutation Importance is based on the decrease in model performance, while SHAP Importance is based on the magnitude of feature attributions.
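To make the contrast concrete, here is a hedged sketch of SHAP Importance as the mean absolute SHAP value per feature, assuming the `shap` package is installed and reusing the same illustrative dataset and model as above:

```python
import numpy as np
import shap  # assumes the shap package is available
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

# SHAP Importance: average magnitude of per-sample feature attributions.
# Note that no model score is re-evaluated here, unlike Permutation Importance.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)  # shape: (n_samples, n_features)
shap_importance = np.abs(shap_values).mean(axis=0)

for name, value in sorted(zip(X.columns, shap_importance), key=lambda t: t[1], reverse=True):
    print(f"{name:>8}: {value:.3f}")
```

Because one measure is tied to a performance metric and the other to attribution magnitudes, the two rankings can disagree, for instance on correlated features that the model could swap without losing accuracy.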
A deeper dive into Permutation Importance is available here.