The purpose of any HDL (hardware description language) is to be independent of the EDA vendor (tools) and of the target technology (ASIC or FPGA). In principle, code written with inference can be used on both FPGA and ASIC. An example of inference is the “*” operator: the EDA tool implements a multiplier using whatever cells are available (the basic building blocks and resources of the ASIC library or the FPGA). The essential point is that the tool chooses the implementation. Instantiation is the opposite of inference: it places a technology-specific module directly in the RTL code, which is then no longer technology-independent. Wrappers around technology-specific blocks can ease the swapping of such cells or modules. In theory, then, you can use the same code for FPGA and ASIC if everything is inferred; wherever something is instantiated, you have to swap the module's contents.
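A minimal Verilog sketch of the two styles; `VENDOR_MULT16` is a hypothetical vendor macro used only for illustration:

```verilog
// Inferred multiplier: the synthesis tool maps "*" onto whatever the
// target offers (e.g. DSP blocks on an FPGA, a generated multiplier on ASIC).
module mult_inferred #(parameter W = 16) (
  input  wire [W-1:0]   a,
  input  wire [W-1:0]   b,
  output wire [2*W-1:0] p
);
  assign p = a * b;  // technology-independent: the tool chooses the cells
endmodule

// Instantiation via a wrapper around a (hypothetical) vendor macro.
// Only this wrapper needs to change when migrating between technologies.
module mult_wrapped (
  input  wire [15:0] a,
  input  wire [15:0] b,
  output wire [31:0] p
);
  VENDOR_MULT16 u_mult (.A(a), .B(b), .P(p));  // technology-specific block
endmodule
```

The rest of the design instantiates `mult_wrapped`, so the technology dependence is confined to one file.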
All this is generally speaking. Specifically, FPGA architecture differs in its handling of clocks and resets, among other things. An FPGA is fastest when the datapath flip-flops have no reset at all; this allows the highest clock frequency. If a reset is needed, for example to put an FSM in a safe state, it is made synchronous. In an ASIC, every flip-flop needs a reset for design-for-test reasons; the reset is asserted asynchronously (it needs no clock to take effect) and de-asserted synchronously. These peculiarities make FPGA code harder to migrate to an ASIC, even though prototyping an ASIC in an FPGA is widespread. Some FPGA vendors offer an FPGA-to-ASIC solution, a middle ground between a fully programmable device and a hard-implemented ASIC.
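The two reset styles can be sketched in Verilog as follows (the reset synchronizer that makes the ASIC-style de-assertion synchronous is assumed to exist elsewhere and is not shown):

```verilog
// FPGA style: synchronous reset. The reset is only sampled on the
// clock edge, so it behaves like any other data input to the flop.
module ff_sync_rst (
  input  wire clk, rst, d,
  output reg  q
);
  always @(posedge clk)
    if (rst) q <= 1'b0;   // takes effect on the next clock edge
    else     q <= d;
endmodule

// ASIC style: asynchronous assertion. rst_n is in the sensitivity
// list, so the flop clears without needing a clock; de-assertion is
// made synchronous by a separate reset synchronizer.
module ff_async_rst (
  input  wire clk, rst_n, d,
  output reg  q
);
  always @(posedge clk or negedge rst_n)
    if (!rst_n) q <= 1'b0;  // clears immediately, no clock required
    else        q <= d;
endmodule
```

The sensitivity list is the whole difference: including the reset there is what lets synthesis map the flop onto an asynchronous-reset cell.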
Anyway, FPGA designs tend to be more error-prone because the code is not simulated as thoroughly as ASIC code. An ASIC is so expensive that verification must reduce the risk of a bug to the absolute minimum, whereas an FPGA can be reprogrammed, so a bug fix is much easier to apply (which doesn't mean it is easy to roll out in the field). ASIC design is first-time-right oriented, while FPGA design is more relaxed, or at least perceived that way, because the device can be reprogrammed.