I would prefer a profile to keep the connectivity details. Because periodic database checks/polling add excessive overhead to system performance, I would suggest the following, which I think we can still argue over:
Use a middleware component (let's call it an event handler) that reads the tunneled events, populates them into the database, and at the same time sends a signal (internal event) to a coordinator that can distinguish between event types. This coordinator is the message consumer coordinator.
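Just to make it concrete, here is a minimal sketch of what I mean; the class, entity and field names (TunneledEvent, EventHandler, onNewEvent, etc.) are only placeholders, not your actual code:

    import javax.ejb.Stateless;
    import javax.inject.Inject;
    import javax.persistence.Entity;
    import javax.persistence.EntityManager;
    import javax.persistence.Id;
    import javax.persistence.PersistenceContext;

    // Placeholder entity for a tunneled event (would live in its own file).
    @Entity
    class TunneledEvent {
        @Id private String id;
        private String type;
        private boolean processed;
        private java.sql.Timestamp createdAt;

        public String getId()   { return id; }
        public String getType() { return type; }
    }

    // The event handler: persists the event and signals the coordinator in one step.
    @Stateless
    public class EventHandler {

        @PersistenceContext
        private EntityManager em;                        // writes the event to the DB

        @Inject
        private MessageConsumerCoordinator coordinator;  // the coordinator sketched below

        public void onTunneledEvent(TunneledEvent event) {
            em.persist(event);               // 1) populate the database
            coordinator.onNewEvent(event);   // 2) signal the coordinator (internal event)
        }
    }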
The coordinator can then, based on the event type, notify the corresponding message consumer that a new event exists, and the MDB can start processing it. (Notification = pushing the event onto the MDB's queue.)
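The coordinator itself only routes and never touches the database; roughly something like this (the JNDI queue names and event type strings are again just assumptions for illustration):

    import javax.annotation.Resource;
    import javax.ejb.Stateless;
    import javax.inject.Inject;
    import javax.jms.JMSContext;
    import javax.jms.Queue;

    // Message consumer coordinator: distinguishes event types and pushes the
    // event (here just its id) onto the queue of the matching MDB.
    @Stateless
    public class MessageConsumerCoordinator {

        @Inject
        private JMSContext jms;

        @Resource(lookup = "java:/jms/queue/OrderEvents")
        private Queue orderQueue;

        @Resource(lookup = "java:/jms/queue/PaymentEvents")
        private Queue paymentQueue;

        @Resource(lookup = "java:/jms/queue/UnknownEvents")
        private Queue fallbackQueue;   // keeps an unknown event type from causing a failure

        public void onNewEvent(TunneledEvent event) {
            switch (event.getType()) {
                case "ORDER":   jms.createProducer().send(orderQueue, event.getId());   break;
                case "PAYMENT": jms.createProducer().send(paymentQueue, event.getId()); break;
                default:        jms.createProducer().send(fallbackQueue, event.getId());
            }
        }
    }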
On startup of the application, the message consumer can query the database for events that are already persisted but not yet processed (probably based on time), by interfacing with the database, probably via a SQL command or a stored procedure that reads the events.
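One simple way to do that is a startup bean that re-dispatches anything still marked as unprocessed; this is only a sketch (entity and column names are assumptions), and the JPQL query could equally be a native SQL command or a stored procedure call:

    import java.util.List;
    import javax.annotation.PostConstruct;
    import javax.ejb.Singleton;
    import javax.ejb.Startup;
    import javax.inject.Inject;
    import javax.persistence.EntityManager;
    import javax.persistence.PersistenceContext;

    // Runs once on application startup and replays persisted-but-unprocessed events.
    @Singleton
    @Startup
    public class StartupRecovery {

        @PersistenceContext
        private EntityManager em;

        @Inject
        private MessageConsumerCoordinator coordinator;

        @PostConstruct
        public void recoverUnprocessedEvents() {
            // Oldest first, so the original ordering is roughly preserved.
            List<TunneledEvent> pending = em.createQuery(
                    "SELECT e FROM TunneledEvent e WHERE e.processed = false ORDER BY e.createdAt",
                    TunneledEvent.class)
                .getResultList();
            pending.forEach(coordinator::onNewEvent);
        }
    }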
This means the message consumers are already running and subscribed, and if the traffic grows you can use load balancing to spread the load over a number of similar consumers. Of course you have one MDB per event type (this is what I read from your requirements below). Because a single MDB will be blocked until one of its own messages is processed, you can use some dynamic behavior to avoid this. That would also be helpful to avoid any failure in case an unknown event type is received.
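The consumer side would then be a plain MDB per event-type queue; the container-managed instance pool is one way to get that kind of dynamic behavior, since several events of the same type can then be processed in parallel. Sketch only, with an assumed queue name:

    import javax.ejb.ActivationConfigProperty;
    import javax.ejb.MessageDriven;
    import javax.jms.Message;
    import javax.jms.MessageListener;
    import javax.jms.TextMessage;

    // One MDB per event type, listening on the queue the coordinator pushes into.
    @MessageDriven(activationConfig = {
        @ActivationConfigProperty(propertyName = "destinationLookup",
                                  propertyValue = "java:/jms/queue/OrderEvents"),
        @ActivationConfigProperty(propertyName = "destinationType",
                                  propertyValue = "javax.jms.Queue")
    })
    public class OrderEventConsumer implements MessageListener {

        @Override
        public void onMessage(Message message) {
            try {
                String eventId = ((TextMessage) message).getText();
                // hand the event id over to the actual event processor here
            } catch (Exception e) {
                // let the container redeliver or dead-letter the message
                throw new RuntimeException(e);
            }
        }
    }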
-> Event handler -> persist in DB + signal coordinator -> message consumer (no DB access) -> event processor
Start up:
Message consumer -> DB to collect events and process
This avoids frequent DB access, which is a benefit.