Replies: 4 comments 7 replies
-
You are publishing concurrently with other operations on a shared channel. No client supports that. It has been discussed hundreds of times. So look for places that need synchronisation in your code. Or delay publishing by N seconds until the topology is set up. Or something like that.
-
As I generally expected, the frame dispatch handles these cases:

```go
case *basicReturn:
	ret := newReturn(*m)
	ch.notifyM.RLock()
	for _, c := range ch.returns {
		c <- *ret
	}
	ch.notifyM.RUnlock()
```

and

```go
case *basicAck:
	if ch.confirming {
		if m.Multiple {
			ch.confirms.Multiple(Confirmation{m.DeliveryTag, true})
		} else {
			ch.confirms.One(Confirmation{m.DeliveryTag, true})
		}
	}
case *basicNack:
	if ch.confirming {
		if m.Multiple {
			ch.confirms.Multiple(Confirmation{m.DeliveryTag, false})
		} else {
			ch.confirms.One(Confirmation{m.DeliveryTag, false})
		}
	}
```

```go
// ...
func (c *confirms) One(confirmed Confirmation) {
	c.m.Lock()
	defer c.m.Unlock()
	c.deferredConfirmations.Confirm(confirmed)
	if c.expecting == confirmed.DeliveryTag {
		c.confirm(confirmed)
	} else {
		c.sequencer[confirmed.DeliveryTag] = confirmed
	}
	c.resequence()
}
```

I can only speculate what kind of behavior you see, since there never was a code example (some published message properties are not relevant), but at least one of the methods is handed over to a Go channel, which by definition can create a race condition.
-
Where do you go from here? We do not guess in this community. If you can provide a way to reproduce the issue,
then you would have something to work with.
-
Unfortunately I don't have a reproducer. The error in question has happened a grand total of once, so a tcpdump is out. What I was hoping to get were some ideas for what could realistically be happening, given the IMO vanilla usage, so that I could write a reproducer application that would hit the problem. Anyways, I guess I'll try some stuff, see if I can repro.
-
I've been a long-time user of the streadway/amqp library. Recently updated to this repository.
Since then, I have one instance of the error in the subject being sent as a close notification (Exception (505) Reason: "UNEXPECTED_FRAME - expected content header for class 60, got non content header frame instead"). This happened very shortly after program start (within the first second).
Searching around online, the most common explanation seems to be "you're using concurrency, but the library isn't designed for concurrency". However, this library has a lock on `Publish()`, which serializes everything. I don't think the explanation is so simple in this instance.

My usage is fairly vanilla. There are pub-sub "fanout" exchanges. The publish happens with:

The message itself is binary. It may be small (several bytes), or it may be large (100kb+, but probably not as big as 1MB).

What I'm looking for are some plausible theories as to how this could happen. Is there some sort of race at startup? Are we meant to wait for something after `ExchangeDeclare`? We also have a consumer subscription on the same `Channel`. I don't think consuming messages can cause this sort of thing, but maybe in some weird instance? Is there some additional information I should be looking at?